Comment by dcre

2 days ago

This is not a good analogy because reasoning models are not choosing the best from a set of attempts based on knowledge of the correct answer. It really is more like what it sounds like: “did you think about it longer until you ruled out various doubts and became more confident?” Of course nobody knows quite why directing more computation in this way makes them better, and nobody seems to take the reasoning trace too seriously as a record of what is happening. But it is clear that it works!

> Of course nobody knows quite why directing more computation in this way makes them better, and nobody seems to take the reasoning trace too seriously as a record of what is happening. But it is clear that it works!

One thing that's hard to wrap my head around is that we are giving more and more trust to something we don't understand, on the assumption (often unchecked) that it just works. Basically, your refrain gets used to justify all sorts of odd setups of AIs, agents, etc.

  • Trusting things to work based on practical experience, without formal verification, is the norm rather than the exception. In formal contexts like software development, people have the means to evaluate them and use good judgment.

    I am much more worried about the problem of LLMs actively misleading low-info users, especially children and old people, into thinking they're people.

Bad news: it doesn't seem to work as well as you might think: https://arxiv.org/pdf/2508.01191

As one might expect: the AI isn't actually thinking, it's just spending more tokens on the problem. This sometimes leads to the desired outcome, but the phenomenon is very brittle and disappears when the AI is pushed outside the bounds of its training.

To quote their discussion, "CoT is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching, fundamentally bounded by the data distribution seen during training. When pushed even slightly beyond this distribution, its performance degrades significantly, exposing the superficial nature of the “reasoning” it produces."

  • I keep wondering whether people have actually examined how this work draws its conclusions before citing it.

    This is science at its worst, where you start from an inflammatory conclusion and work backwards. There is nothing particularly novel presented here, especially not in the mathematics; obviously performance will degrade on out-of-distribution tasks (and would do so for humans under the same formulation), but the real question is how out-of-distribution a lot of tasks actually are if they can still be solved with CoT. Yes, if you restrict the dataset, then the model will perform poorly. But humans already have a pretty large visual dataset to pull from, so what are we comparing to here? How do tiny language models trained on small amounts of data demonstrate fundamental limitations?

    I'm eager to see more work showing the limitations of LLM reasoning, both at small and large scale, but this ain't it. Others have already supplied similar critiques, so let's please stop sharing this one around without the requisite grain of salt.

    • "This is science at its worst, where you start at an inflammatory conclusion and work backwards"

      Science starts with a guess, and you run experiments to test it.

      5 replies →

  • A viable consideration is that the models will home in on and reinforce an incorrect answer, a natural side effect of LLM technology wanting to push certain answers higher in probability and to repeat anything already in context.

    Whether in a conversation or a thinking context, this doesn't prevent the model from producing the wrong answer, so the paper on the illusion of thinking makes sense.

    What actually seems to be happening is a form of conversational prompting. Of course, with the right back-and-forth conversation with an LLM you can inject knowledge in a way that shifts the natural distribution (again, a side effect of the LLM tech), but by itself it won't naturally get the answer right every time. A rough sketch of that distribution shift is at the end of this comment.

    If this extended thinking were actually working, you would expect the LLM to be able to logically conclude the correct answer with very high accuracy, which it does not.
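
    To make the distribution-shift point concrete, here is a minimal sketch, assuming the Hugging Face transformers package and the tiny gpt2 checkpoint purely as a stand-in for a frontier model. It compares the model's next-token probability for an answer token with and without a hint injected into the context; the prompt, the hint, and the checkpoint are illustrative choices, not anything from the paper.

      # Minimal sketch: injected context shifts the next-token distribution.
      # Assumes `pip install torch transformers`; gpt2 is only a small stand-in.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      def next_token_probs(prompt: str) -> torch.Tensor:
          """Return the model's probability distribution over the next token."""
          input_ids = tokenizer(prompt, return_tensors="pt").input_ids
          with torch.no_grad():
              logits = model(input_ids).logits[0, -1]  # logits at the last position
          return torch.softmax(logits, dim=-1)

      bare = "Q: What is the capital of Australia? A:"
      hinted = ("Canberra, not Sydney, is the capital of Australia. "
                "Q: What is the capital of Australia? A:")

      # Score just the first subword of the answer; enough to see the shift.
      target = tokenizer(" Canberra", add_special_tokens=False).input_ids[0]
      print("P(answer token | bare):  ", next_token_probs(bare)[target].item())
      print("P(answer token | hinted):", next_token_probs(hinted)[target].item())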

  • They experimented with GPT-2-scale models. It's hard to draw meaningful conclusions from that in the GPT-5 era.

  • The other commenter is more articulate, but you simply cannot draw the conclusion from this paper that reasoning models don't work well. They trained tiny little models and showed they don't work. Big surprise! Meanwhile, every other piece of evidence available shows that reasoning models are more reliable on sophisticated problems. Just a few examples:

    - https://arcprize.org/leaderboard

    - https://aider.chat/docs/leaderboards/

    - https://arstechnica.com/ai/2025/07/google-deepmind-earns-gol...

    Surely the IMO problems weren't "within the bounds" of Gemini's training data.

    • The Gemini IMO result used a model specifically fine-tuned for math.

      Certainly they weren't training on the unreleased problems, but defining "out of distribution" gets tricky.

      2 replies →

  • This is not the slam dunk you think it is. Thinking longer genuinely provides better accuracy; sure, there are diminishing returns to increasing thinking tokens.

    GPT-5 fast gets many things wrong, but switching to the thinking model very often fixes the issues. A rough sketch of how you might compare the two yourself is below.
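
    For reference, a rough sketch of such a comparison, assuming the OpenAI Python SDK's Responses API; the model name, the reasoning parameter shape, and the accepted effort values are assumptions and may differ by provider and SDK version.

      # Rough sketch: same prompt at low vs. high reasoning effort.
      # Assumes `pip install openai` and OPENAI_API_KEY in the environment;
      # "gpt-5" and the reasoning={"effort": ...} shape are assumptions.
      from openai import OpenAI

      client = OpenAI()
      prompt = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

      for effort in ("minimal", "high"):
          response = client.responses.create(
              model="gpt-5",                 # assumed model identifier
              input=prompt,
              reasoning={"effort": effort},  # assumed parameter shape
          )
          print(f"effort={effort}: {response.output_text}")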