Comment by 4gotunameagain
1 day ago
The very idea that AGI will arise from LLMs is ridiculous at best.
Computer science hubris at its finest.
Why is it ridiculous that an LLM or a system similar to or built off of an LLM could reach AGI?
Because intelligence is so much more than stochastically repeating stuff you've been trained on.
It needs to learn new information, create novel connections, be creative. We are utterly clueless as to how the brain works and how intelligence arises.
We took one cell, a neuron, made the simplest possible model of it, made some copies of it, and you think it will suddenly spark into life if we throw enough GPUs at it?
>It needs to learn new information, create novel connections, be creative.
LLMs can do all of those things.
If AGI is built from LLMs, how could we trust it? It's going to "hallucinate", so I'm not sure that this AGI future people are clamoring for is going to really be all that good if it is built on LLMs.
Because LLMs are just stochastic parrots and don't do any thinking.
Humans who repeatedly deny LLM capabilities despite the numerous milestones they've surpassed seem more like stochastic parrots.
The same arguments are always brought up, often as short pithy one-liners with little clarification. This argument first emerged when LLMs could barely write functional code; now that LLMs have reached gold-medal performance on the IMO, it seems silly that it is still being made without any interrogation of its potential faults, or any clarification of the precise boundary of intelligence LLMs will supposedly never cross.