Comment by HarHarVeryFunny
1 year ago
Yes, an LLM can be periodically retrained, which is what is being done today, but a human-level AGI needs to be able to learn continuously.
If we're trying something new and make a mistake, then we need to seamlessly learn from the mistake and continue - explore the problem and learn from successes and failures. It wouldn't be much use if your "AGI" intern stopped at its first mistake and said "I'll be back in 6 months after I've been retrained not to make THAT mistake".
I don't think there's a single way that we learn things; there's too much variety in how, when, and why things are committed to memory, and still more variety in what actually updates our thinking process or world model. We forget the overwhelming majority of sense perceptions immediately, and even when we're intentionally trying to learn something we can fail to recall it a few seconds after seeing it. Even when we succeed at short-term recall, the thing we've "learnt" may be gone the next day, or we may only recall it correctly a small fraction of the time. Conversely, some things are immediately and permanently ingrained in our minds, whether because they're extremely impactful or sometimes for no apparent reason at all.

It's too deep a topic to go into fully, but all this is to say that it isn't so simple as claiming that continued pretraining of an LLM is completely dissimilar to how humans learn. In fact, the question-and-answer style of fine-tuning so widely used to add new knowledge or steer a model's responses is remarkably similar to how humans learn: quizzing with immediate feedback, repeated over many samples that vary the wording while keeping the same underlying information, is one of the best ways for people to memorize things.
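To make that last point concrete, here's a minimal sketch of what such a fine-tuning set might look like - the fact, the paraphrased questions, and the prompt/completion JSONL layout are all just illustrative, not any particular vendor's format:

    import json

    # One fact, asked several ways - the variation in wording is the point,
    # much like drilling a human with differently-phrased quiz questions.
    qa_variants = [
        ("When did the Suez Canal open?", "The Suez Canal opened in 1869."),
        ("In what year did the Suez Canal open to shipping?", "It opened in 1869."),
        ("The Suez Canal began operating in which year?", "1869."),
    ]

    # Write one training sample per line in a simple prompt/completion JSONL layout.
    with open("sft_samples.jsonl", "w") as f:
        for question, answer in qa_variants:
            f.write(json.dumps({"prompt": question, "completion": answer}) + "\n")

The repetition-with-variation is what pushes the model (or the student) toward the underlying fact rather than a single surface form of the question.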