Comment by Zetaphor
7 days ago
Hey OP, I'm curious about the accuracy of this quote:
> When the tool is gone, the model cannot “hallucinate” a diagnosis because it lacks the “form” to reason and write it on.
What's to stop the model from just hallucinating an entire tool call, or the result of one? If there's no tool available, it could simply make up a result, and further context would then treat that fabrication as a source of truth. Maybe an explicit error message would help, but even that feels like it would be prone to hallucination.
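For what it's worth, here's a rough sketch of what I mean by an explicit error (hypothetical Python, not from the post; the tool names and the `execute_tool_call` helper are made up, assuming a simple tool registry in an agent loop):

```python
from typing import Callable, Dict

# Registry of tools that actually exist right now (names are illustrative).
TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_patient_record": lambda query: f"record for {query}",
}

def execute_tool_call(name: str, argument: str) -> str:
    """Run a model-emitted tool call, or return an explicit error string."""
    tool = TOOLS.get(name)
    if tool is None:
        # Feed an unambiguous failure back into the context so later turns
        # can't treat a hallucinated result as a source of truth.
        return f"ERROR: tool '{name}' is not available; no result was produced."
    return tool(argument)

if __name__ == "__main__":
    # The model hallucinates a call to a tool that was removed.
    print(execute_tool_call("diagnose_patient", "chest pain"))
    # -> ERROR: tool 'diagnose_patient' is not available; no result was produced.
```

Even with that guard, nothing stops the model from skipping the tool-call format entirely and just asserting a diagnosis in plain text, which is the part I'm skeptical about.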