Comment by YeGoblynQueenne

19 days ago

>> Current language models do not handle context this way. They rely primarily on parametric knowledge—information compressed into their weights during massive pre-training runs. At inference time, they function largely by recalling this static, internal memory, rather than actively learning from new information provided in the moment.

>> This creates a structural mismatch. We have optimized models to excel at reasoning over what they already know, yet users need them to solve tasks that depend on messy, constantly evolving context. We built models that rely on what they know from the past, but we need context learners that rely on what they can absorb from the environment in the moment.

>> To bridge this gap, we must fundamentally change our optimization direction.

All this is right and what critics of the deafening over-hyping of LLMs have long pointed out.

So what do the authors propose we do? Currently, they propose ... a benchmark. But what is that going to achieve? We know very well that LLMs, and neural nets in general, are masters at saturating benchmarks without actually mastering the abilities those benchmarks are meant to measure.

What happens when, in a year or so, as will definitely happen, LLMs saturate this benchmark too? Will we all have to agree that LLMs can now do "context learning" and then move on to the next big thing? This year it's "world models", last year it was "reasoning", and next year it's going to be "context learning"? And then what? Where is this all leading, if after all the billions spent and all the benchmarks beaten conclusively, LLMs still can't do reasoning, can't do world-modelling, can't do context learning, and so on, and so forth?

> Where is this all leading to, if after all the billions spent and all the benchmarks beaten conclusively, LLMs still can't do reasoning, can't do world-modelling, can't do context learning and so on, and so forth?

Humans completely displaced from the workforce while they harp "but LLMs can't really think and don't really have creativity!"

  • Humans are being displaced because moronic business magnates are trying to force-feed the country their wares. Having failed spectacularly, they are now forcing governments across the world to buy those wares under threat from the US government.

  • I think that’s more of a reflection on what employers are willing to pay for…

    • Agreed, but does it matter? LLMs will affect us by how much they displace the parts of us that employers are willing to pay for. The "but they don't really think!" line is just a cope.