Comment by otabdeveloper4
2 days ago
> but now I see its reasoning
It's not showing its reasoning. "Reasoning" models are simply trained to output more tokens, in the hope that more tokens means fewer hallucinations.
It's just a marketing trick, and there is no evidence this sort of fake "reasoning" actually gives any benefit.