Comment by otabdeveloper4

9 days ago

> reasoning models create text that looks like reasoning, which helps solve problems, but isn’t always a faithful description of how the model actually got to the answer

Correct. Just more generated bullshit on top of the already generated bullshit.

I wish the bubble would pop already and someone would build an LLM that returns straight-up references to the training set instead of this anthropomorphic, conversation-like format.