Comment by otabdeveloper4

2 months ago

I don't either, but chain of thought is obviously bullshit and just more LLM hallucination.

LLMs will routinely "reason" through a solution and then proceed to give out a final answer that is completely unrelated to the preceding "reasoning".

It's more hallucination in the sense that all LLM output is hallucination. CoT is not "what the LLM is thinking". I think of it as just creating more context/prompt for itself on the fly, so that when it comes up with a final response it has all that reasoning in its context window.
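A rough way to picture that "self-generated context" view, as a sketch and not any particular vendor's API (`generate` here is just a stand-in for whatever completion call you're using):

```python
from typing import Callable


def answer_with_cot(question: str, generate: Callable[[str], str]) -> str:
    """Final answer conditioned on the question plus freshly generated 'reasoning'."""
    # Step 1: the model emits "reasoning" tokens -- really just more text
    # it writes into its own context.
    reasoning = generate(f"Question: {question}\nThink step by step:")

    # Step 2: the final answer is sampled with that reasoning sitting in the
    # context window, whether or not it reflects anything the model is
    # actually "thinking".
    return generate(
        f"Question: {question}\n"
        f"Reasoning: {reasoning}\n"
        f"Final answer:"
    )
```

Nothing forces the second call to be consistent with the first, which is exactly why you see answers that contradict the preceding "reasoning".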

  • Exactly, whether or not it’s the “actual thought” of the model, it does influence its final output, so it matters to the user.

    • > it does influence its final output

      We don't really know that. So far CoT is only used to sell LLMs to the user. (Both figuratively as a neat trick and literally as a way to increase token count.)
