Comment by Terr_
2 months ago
> There's no real reasoning. It seems that reasoning is just a feedback loop on top of existing autocompletion.
I like to say that if regular LLM "chats" are really movie scripts being incrementally written and selectively acted out, then "reasoning" models add a stereotypical film noir twist, where the protagonist-detective narrates hidden things to himself.
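
As a rough illustration of that "feedback loop on top of autocompletion" framing, here's a minimal Python sketch. It is not how any particular model actually works; `complete()` is a purely hypothetical stand-in for a plain text-completion call, and the loop just feeds hidden "thinking" text back into the prompt before producing the visible reply.

```python
# Minimal sketch: "reasoning" as a feedback loop over plain autocompletion.
# `complete` is a hypothetical stand-in for any text-completion model call.

def complete(prompt: str, max_tokens: int = 256) -> str:
    """Stub: in practice this would call a text-completion model."""
    return "...model continuation of the prompt..."

def reasoning_chat(user_message: str, loops: int = 3) -> str:
    # The "movie script" so far: a transcript the model keeps extending.
    script = f"User: {user_message}\n"

    # The noir twist: hidden narration the user never sees,
    # fed back into the prompt on each pass.
    for _ in range(loops):
        thought = complete(script + "Assistant (thinking): ")
        script += f"Assistant (thinking): {thought}\n"

    # Only the final continuation is "acted out" for the user.
    return complete(script + "Assistant: ")

if __name__ == "__main__":
    print(reasoning_chat("Why is the sky blue?"))
```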