Comment by Sohcahtoa82

6 days ago

> The "reasoning" it outputs does not have a causal relationship with the end result.

It absolutely does.

Now, we can argue all day about whether it's truly "reasoning", but I've certainly seen cases where asking a question with "give just the answer" consistently produces a wrong answer, while letting the model explain its thought process before giving a final answer consistently gets it right.

LLMs are at their core just next-token guessing machines. By allowing them to output extra "reasoning" tokens, they can prime their own context to give better answers.
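
Here's a minimal sketch of the comparison I mean, assuming the `openai` Python client (v1+) and a placeholder model name; the specific question and prompts are just illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    # Single-turn chat completion; model name is a placeholder.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Answer-only: the model has to commit to a number immediately.
print(ask(QUESTION + " Give just the answer, no explanation."))

# Reasoning-first: the intermediate tokens it emits become part of the
# context that conditions the final answer.
print(ask(QUESTION + " Explain your thought process step by step, then give the final answer."))
```

In my experience the first prompt is where you see the consistent misses; the second tends to land on the right answer. Your mileage may vary by model.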

Think of it like solving an algebraic equation. Humans typically can't solve anything but the most trivial equations in a single step, and neither can an LLM. But like a human, an LLM can solve one by working through it one step at a time.
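
A toy version of that analogy, with arbitrary numbers; each printed intermediate line plays the role of the "reasoning" tokens that make the next step easy:

```python
# Solve 3x + 7 = 22 one small step at a time, showing every intermediate state.
a, b, c = 3, 7, 22              # equation: a*x + b = c

print(f"{a}x + {b} = {c}")      # 3x + 7 = 22
c = c - b                       # subtract b from both sides
print(f"{a}x = {c}")            # 3x = 15
x = c / a                       # divide both sides by a
print(f"x = {x}")               # x = 5.0
```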

Multiple studies have shown there is no causal relationship there, and the reasoning traces can be complete bull even when the result is correct.