
Comment by cluckindan

6 days ago

It is not thinking. It is trying to deceive you. The "reasoning" it outputs does not have a causal relationship with the end result.

> The "reasoning" it outputs does not have a causal relationship with the end result.

It absolutely does.

Now, we can argue all day about whether it's truly "reasoning", but I've certainly seen cases where asking a question with "Give just the answer" consistently produces a wrong answer, whereas letting it explain its thought process before giving a final answer consistently gets it right.

LLMs are at their core just next-token guessing machines. Letting them output extra "reasoning" tokens primes the context, so the final answer is conditioned on those intermediate steps and tends to be better.
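To make that concrete, here's a rough sketch in Python. The call_llm helper is hypothetical (a stand-in for whichever chat API you're using); the only difference between the two calls is whether the model gets to spend tokens on intermediate steps before committing to an answer:

    # Hypothetical helper -- a stand-in for whatever chat API you actually use.
    def call_llm(prompt: str) -> str:
        return ""  # placeholder

    question = "A train leaves at 09:40 and arrives at 13:05. How long is the trip?"

    # Style 1: force an immediate answer -- the model has to commit in one shot.
    terse = call_llm(question + "\nGive just the answer, nothing else.")

    # Style 2: let it write out intermediate steps first. Those "reasoning"
    # tokens become part of the context the final answer is conditioned on.
    verbose = call_llm(question + "\nWork through it step by step, then give the final answer.")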

Think of it like solving an algebraic equation. Humans typically can't solve any but the most trivial equations in a single step, and neither can an LLM. But, like a human, an LLM can solve one by working through it one step at a time.
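A toy version of what "one step at a time" buys you; each intermediate line plays the same role as the extra tokens the model is allowed to emit:

    # Solve 3x + 7 = 22 one small transformation at a time,
    # rather than jumping straight to the answer.
    rhs = 22 - 7        # subtract 7 from both sides -> 3x = 15
    x = rhs / 3         # divide both sides by 3     -> x = 5
    print(x)            # 5.0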

  • Multiple studies have shown there is no causal relationship there, and the reasoning traces can be complete bull even if the result is correct.

The longer "it" reasons, the more attention sinks are used to come to a "better" final output.

  • I’ve looked up attention sinks and can’t figure out how you’re using the term here. It sounds interesting; would you care to elaborate?