Comment by batshit_beaver
2 hours ago
Examples:
https://arxiv.org/html/2506.02878v1
https://arxiv.org/pdf/2508.01191
Anthropic themselves: https://www.anthropic.com/research/reasoning-models-dont-say...
They were approaching this from an interpretability standpoint, but the more interesting finding is that models arrive at an answer that fits their training and the provided context, and the CoT is then generated to match that anticipated answer.
These studies include examples where the CoT directly contradicts the response the model ultimately settles on.
This is not reasoning. This is pretense.