Comment by ordersofmag
14 hours ago
I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.
5 years ago, a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language, something clearly beyond what computers would ever be able to do. The implication was that human brains had some magic sauce that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years, replaced by what appears to be a different magic sauce composed of the phrase 'not really thinking' and the whole 'just repeating what it's heard'/'parrot' argument.
I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps. There isn't some magic chasm that needs to be crossed. When we get there, I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and instead of saying "isn't it amazing computers can do this," we'll say "wow, is that all there is to thinking like a human?"
> 5 years ago, a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language, something clearly beyond what computers would ever be able to do.
Mastery of words is thinking? By that line of argument, computers have been able to think for decades.
Humans don't think only in words. Our context, memory, and thoughts are processed and occur in ways we still don't understand.
There's a lot of great information out there describing this [0][1]. Continuing to believe these tools are thinking, however, is dangerous. I'd wager the belief persists for a simple reason: you can't see the process, and it's non-deterministic, so it feels like thinking. ELIZA tricked people. LLMs are no different.
[0] https://archive.is/FM4y8, https://www.theverge.com/ai-artificial-intelligence/827820/l...
[1] https://www.raspberrypi.org/blog/secondary-school-maths-show...
> Mastery of words is thinking?
That's the crazy thing. Yes, in fact, it turns out that language encodes and embodies reasoning. All you have to do is pile up enough of it in a high-dimensional space, use gradient descent to model its original structure, and add some feedback in the form of RL. At that point, reasoning is just a database problem, which we currently attack with attention.
No one had the faintest clue. Even now, many people not only don't understand what just happened, but they don't think anything happened at all.
ELIZA, ROFL. How'd ELIZA do at the IMO last year?
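To make that "reasoning as a database problem, attacked with attention" claim concrete, here is a toy sketch of scaled dot-product attention in plain numpy (illustrative only, not any real model's code; shapes and names are made up for the example). Each query is scored against every key, and the output is a softmax-weighted blend of the values, i.e., a soft, differentiable lookup.

    import numpy as np

    def attention(Q, K, V):
        # Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v) -> (n_queries, d_v)
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # how well each query matches each key
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)        # softmax over keys: a soft "lookup"
        return w @ V                              # weighted retrieval of the stored values

    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
    print(attention(Q, K, V))   # each output row is a blend of the rows of V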
So people without language cannot reason? I don't think so.
> ELIZA, ROFL. How'd ELIZA do at the IMO last year?
What's funny is the failure to grasp any contextual framing of ELIZA. When it came out, people were impressed by its reasoning, its responses. And by your line of defense, it could think because it had mastery of words!
But fast-forward the timeline 30 years. You will have been in the same camp as those who argued on behalf of ELIZA, while the rest of the world asks, bewildered: how did people ever think ChatGPT could think?
> I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.
This is the definition of the word 'novel': something that isn't merely a combination of what came before.