Because intelligence is so much more than stochastically repeating stuff you've been trained on.
It needs to learn new information, create novel connections, be creative. We are utterly clueless as to how the brain works and how intelligence is created.
We just took one cell, a neuron, made the simplest possible model of it, made some copies of it, and you think it will suddenly spark into life if you throw enough GPUs at it?
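To be concrete about what "simplest possible model" means, a standard artificial neuron is roughly the following (an illustrative Python sketch; the function names and the choice of ReLU are my own, not any particular framework's API):

    import numpy as np

    def neuron(inputs, weights, bias):
        # The whole "model of one cell": a weighted sum of inputs passed through
        # a fixed nonlinearity (ReLU here). Dendritic computation, spike timing,
        # neuromodulation and plasticity are all abstracted away.
        return max(0.0, float(np.dot(weights, inputs)) + bias)

    def layer(inputs, weight_matrix, biases):
        # "Made some copies of it": a layer is just many such units in parallel.
        return np.maximum(0.0, weight_matrix @ inputs + biases)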
> It needs to learn new information, create novel connections, be creative.
LLMs can do all of those things.
Nope. They can't learn anything after training; they only "learn" within the very narrow context window.
Any novel connections come from randomness, hence hallucinations rather than useful connections grounded in background knowledge of the systems or concepts involved (rough sketch of what I mean below).
As for creativity, see my previous point. Spitting out words that plausibly go next to each other isn't creativity. Creativity implies a goal, a purpose, or sometimes chance, but combined with systematic thinking and an understanding of the world.
If AGI is built from LLMs, how could we trust it? It's going to "hallucinate", so I'm not sure the AGI future people are clamoring for is going to be all that good if it's built on that foundation.
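To make the context-window and randomness points concrete, a single decoding step looks roughly like this (an illustrative Python sketch; `model`, the token handling, and the numbers are placeholders, not any real library's API):

    import numpy as np

    def sample_next_token(model, context_tokens, context_limit=4096, temperature=1.0):
        # The model's weights are fixed after training; the only "new" information
        # it ever sees is whatever still fits inside the truncated context window.
        window = context_tokens[-context_limit:]   # older tokens are simply dropped
        logits = model(window)                     # hypothetical forward pass over frozen weights
        probs = np.exp((logits - np.max(logits)) / temperature)
        probs /= probs.sum()
        # Any "novelty" in the output is a random draw from the next-token distribution.
        return int(np.random.choice(len(probs), p=probs))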
Because LLMs are just stochastic parrots and don't do any thinking.
Humans who repeatedly deny LLM capabilities, despite the numerous milestones LLMs have surpassed, seem more like stochastic parrots themselves.
The same arguments are always brought up, often as short, pithy one-liners without much clarification. This argument first emerged when LLMs could barely write functional code; now that LLMs have reached gold-medal performance on the IMO, it seems silly that it is still being made with little interrogation of its potential faults, or clarification of the precise boundary of intelligence LLMs will never be able to cross.
Which novel idea have LLMs brought forward so far?
Call me back when LLMs stop "hallucinating" constantly.