Comment by atleastoptimal

2 days ago

Far more is being done than simply throwing more GPUs at the problem.

GPT-5 required less compute to train than GPT-4.5. Data, RL, architectural improvements, etc. all contribute to the rate of improvement we're seeing now.

The very idea that AGI will arise from LLMs is ridiculous at best.

Computer science hubris at its finest.

  • Why is it ridiculous that an LLM or a system similar to or built off of an LLM could reach AGI?

    • Because intelligence is so much more than stochastically repeating stuff you've been trained on.

It needs to learn new information, create novel connections, be creative. We are utterly clueless as to how the brain works and how intelligence is created.

We took one cell, a neuron, made the simplest possible model of it, made some copies of it, and you think it will suddenly spark into life if you throw GPUs at it?

    • If AGI is built from LLMs, how could we trust it? It's going to "hallucinate", so I'm not sure the AGI future people are clamoring for will really be all that good.