Comment by HarHarVeryFunny
3 months ago
I think continual learning is very different from memory consolidation. Learning isn't the same as just stacking memories, and anyway LLMs aren't learning the right thing - to create human/animal-like intelligence requires predicting the outcomes of actions, not just auto-regressive continuations.
Continual learning, resulting in my AI being different from yours because we've got them doing different things, is also likely to turn the current training and deployment paradigm on its head.
I agree we'll get there one day, but I expect we'll spend the next decade exploiting LLMs before there is any serious effort to move on to new architectures.
In the meantime, DeepMind for one has indicated it will try to build its version of "AGI" with an LLM as a component of it, but it remains to be seen exactly what it ends up building and how much new capability that buys. In the long term, building in language as a component - rather than building in the ability to learn language, and everything else that humans are capable of learning - is going to prove a limitation. Personally, I wouldn't call it AGI until we do get to that level of being able to learn everything that a human can.