Comment by HarHarVeryFunny
1 day ago
I think if we're considering the nature of intelligence, pursuant to trying to replicate it, then the focus needs to be more evolutionary and functional, not the behavior of lazy modern humans who can get most of their survival needs met at Walmart or Amazon!
The way that animals (think apes and dogs, etc., not just humans) learn is by observing and interacting. If something is new or behaves in unexpected ways, then "prediction failure", aka surprise, leads them to focus on it and interact with it, which is the mechanism evolution has discovered for learning more about it.
Yes, an LLM has some agency via tool use, and from tool output it can learn/verify to some extent, although without continual learning this is only of ephemeral value.
This is all a bit off topic to my original point though, which is the distinction between trying to learn from secondhand, conflicting hearsay (he said, she said) vs having the ability to learn the truth for yourself, which starts with being built to predict the truth (the external real world) rather than being built to predict statistical "he said, she said" continuations. Sure, you can mitigate a few of an LLM's shortcomings by giving them tools etc, but fundamentally they are just doing the wrong thing (self-prediction) if you are hoping for them to become AGI rather than just language models.