Comment by NitpickLawyer

16 hours ago

a) No, Gemini 2.5 was shown to "win" gold w/o tools - https://arxiv.org/html/2507.15855v1

b) Reductionism isn't worth our time. Planning works in the real world, today (try any agentic tool like cc/codex/whatever). And if you're set on the purist view, there's mounting evidence from Anthropic that there is planning inside the core of an LLM.

c) so ... not true? Long context works today.

This is simply moving the goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.

a) That "no-tools" win depends on prompt orchestration, which can still be categorized as tooling.

b) Next-token training doesn’t magically grant inner long-horizon planners.

c) Long context ≠ robustness at arbitrary length. Degradation with scale remains.

Not moving goalposts, just keeping terms precise.

  • My man, you're literally moving all the goalposts as we speak.

    It's not just "long context" - you demand "infinite context" and "any length" now. Even humans don't have that. "No tools" is no longer enough - what, do you demand "no prompts" now too? Having LLMs decompose tasks and prompt each other the way humans do is suddenly a no-no?

I’m not demanding anything; I’m pointing out that performance tends to degrade as context scales, which follows from the autoregressive nature of current LLM architectures.

      In that sense, Yann was right.
