Comment by HarHarVeryFunny
1 month ago
Yes, if all you did was replace current pre/mid/post training with a new (elusive holy grail) runtime continual learning algorithm, then it would definitely still just be a language model. You seem to be talking about it having TWO runtime continual learning algorithms, next-token and long-horizon RL, but of course RL is part of what we're calling an LLM.
It's not obvious, though, that you'd actually gain much capability if you just did this without changing the learning objective from self-prediction (auto-regressive) to external prediction. Auto-regressive training is what makes LLMs imitators - always trying to do the same as before.
In fact, if you did just let a continual learner loose autonomously in some virtual environment, why would you expect it to do anything different - other than continually learning from whatever it was exposed to in that environment - from putting a current LLM in a loop, together with tool use as a way to expose it to new data? An imitative (auto-regressive) LLM doesn't have any drive to do anything new - if you just keep feeding its own output back in as an input, then it's basically just a dynamical system that will eventually settle down into some attractor states representing the closure of the patterns it has learnt and is generating.
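To make that closed-loop picture concrete, here's a minimal sketch (the generate() call is a hypothetical stand-in for whatever LLM API you're using) of feeding a model's output straight back in as its next input - with a purely imitative objective, nothing in this loop pushes it toward novelty:

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for a single autoregressive LLM call."""
        raise NotImplementedError  # plug in your model/API of choice

    def closed_loop(seed: str, steps: int = 100) -> list[str]:
        # Feed the model's own output back in as the next prompt.
        # Viewed as a dynamical system over text, with no external input
        # or intrinsic drive, the trajectory tends to drift into
        # repetitive, attractor-like patterns drawn from what it learnt.
        history = [seed]
        for _ in range(steps):
            history.append(generate(history[-1]))
        return history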
If you want the model to behave in a more human/animal-like self-motivated agentic fashion, then I think the focus has to be on learning how to act to control and take advantage of the semi-predictable environment, which means making prediction of the environment the learning objective (vs auto-regressive self-prediction), plus some innate drives (curiosity, boredom, etc) to bias behavior to maximize learning and creative discovery.
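One standard way to operationalize such drives (not something specified here - just a sketch in the spirit of curiosity bonuses like ICM/RND) is to reward the agent in proportion to its own prediction error about the environment:

    import numpy as np

    def intrinsic_reward(predicted_next_obs: np.ndarray,
                         actual_next_obs: np.ndarray) -> float:
        # Curiosity-style bonus: the worse the agent's forward model
        # predicts the environment, the larger the reward, biasing it
        # toward states it hasn't learnt yet (a crude proxy for
        # "curiosity" and avoidance of "boredom").
        return float(np.mean((predicted_next_obs - actual_next_obs) ** 2))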
Continual learning also isn't going to magically solve the RL reward problem (how do you define and measure RL rewards in the general, non-math/programming case?). In fact post-training is a very human-curated affair, since humans have identified math and programming as tasks where this works and have created these problem-specific rewards. If you wanted the model to discover its own rewards at runtime, as part of your new runtime RL algorithm perhaps, then you'd have to figure out how to bake that into the architecture.
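For contrast, this is roughly why math and programming are where RLVR works: the reward is a cheap, unambiguous check. A sketch (the answer-extraction helper is hypothetical), with no obvious analogue for open-ended tasks:

    def extract_final_answer(completion: str) -> str:
        """Hypothetical helper: pull the final answer line out of the model's output."""
        lines = completion.strip().splitlines()
        return lines[-1].strip() if lines else ""

    def verifiable_reward(completion: str, ground_truth: str) -> float:
        # Works because the problem ships with a checkable answer;
        # "write a good essay" or "discover something new" has no such verifier.
        return 1.0 if extract_final_answer(completion) == ground_truth else 0.0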
No. There are no architectural changes and no "second runtime learning algorithm". There's just the good old in-context learning that all LLMs get from pre-training. RLVR is a training stage that pressures the LLM to take advantage of it on real tasks.
"Runtime continual learning algorithm" is an elusive target of questionable desirability - given that we already have in-context learning, and "get better at SFT and RLVR lmao" is relatively simple to pull off and gives kickass gains in the here and now.
I see no reason why "behave in a more human/animal-like self-motivated agentic fashion" can't be obtained from more RLVR, if that's what you want to train your LLMs for.
I'm not sure what you are saying. There are LLMs as they exist today, and there are any number of changes one could propose to make to them.
The less you change, the more they stay the same. If you just add "more" RLVR (perhaps for a new domain - maybe chemistry vs math or programming?), then all you will get is an LLM that is better at acing chemistry reasoning benchmarks.
I'm saying that the kind of changes you propose aren't made by anyone, and might generally not be worth making, because "better RLVR" is an easier and better pathway to actual cross-domain performance gains.
If you could stabilize the kind of mess you want to make, you could put that effort into better RL objectives and get more return.