Comment by fnordpiglet
3 hours ago
I think sometimes, though, there are harnesses providing the LLM with guidance. For instance, I’ve recently seen coding agents doing an analysis and then, mid-response, saying “no wait, that’s not right” and course-correcting. That feels implausible as a purely autoregressive rhetorical tic. LLM harnesses are widely used in advanced agentic systems, and I’m sure the Pro-level reasoning models exploit them extensively. I’m not saying this is what happened here, but there is a chance it was something injected by the harness into its thinking.
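
To make concrete what I mean by “injected by the harness” — a purely hypothetical sketch, not a claim about how any particular product works; every name here is made up:

```python
# Sketch of a harness that pauses a partially generated answer, runs a cheap
# check on it, and, if the check fails, injects a corrective note into the
# context before the model resumes. In the model's transcript this would read
# like a spontaneous "no wait, that's not right."

def generate(context: str, stop_after_tokens: int) -> str:
    """Stand-in for whatever model call the harness wraps."""
    raise NotImplementedError

def passes_check(draft: str) -> bool:
    """Stand-in for a verifier: run the tests, lint the diff, etc."""
    raise NotImplementedError

def harnessed_answer(prompt: str, max_rounds: int = 4) -> str:
    context = prompt
    for _ in range(max_rounds):
        draft = generate(context, stop_after_tokens=512)
        context += draft
        if passes_check(context):
            break
        # The "injected thinking": a correction the model did not produce itself.
        context += "\nNo wait, that's not right. Re-check the last step.\n"
    return context
```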