Comment by 3form

7 days ago

You're observing this "paradox" because what you call learning here is not learning in the ML sense; it's deriving better conclusions from more data. The same is true of many ML methods, but that alone doesn't mean any actual learning happens.

There's another phenomenon. It's called pedantic denialism. Deriving conclusions from more data is the same thing as learning. You learned something from the new data, hence the new conclusion. As long as that context window survives, the LLM has learned.
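The distinction the two comments are arguing over can be made concrete with a toy example (my own illustration, not from either commenter): a predictor with completely frozen parameters whose conclusions still improve as more observations enter its "context". Whether you call that learning is exactly the disagreement above.

```python
from fractions import Fraction

def predict_heads(context: str) -> Fraction:
    # Frozen "model": a fixed uniform Beta(1, 1) prior over a coin's bias.
    # Nothing is trained or updated here; the prediction is purely a
    # function of (fixed prior, observed context) -- loosely analogous to
    # an LLM conditioning on its context window with frozen weights.
    heads = context.count("H")
    return Fraction(1 + heads, 2 + len(context))

print(predict_heads(""))     # 1/2 -- no data, prior alone
print(predict_heads("HHH"))  # 4/5 -- same frozen model, sharper conclusion
```

The model's parameters never change, yet its predictions track the data; one side of the thread calls that "deriving conclusions", the other calls it "learning".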