Comment by ACCount37
3 days ago
Modern LLMs appear to have a degree of introspection into their own confidence. They can identify their own hallucinations at a rate significantly better than chance, so there must be some sort of "do I recall this?" mechanism built into them, even if it isn't an especially reliable one.
Anthropic has discovered that this is definitely the case for name recognition, and I suspect that names aren't the only things subject to a process like that.
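To make "better than chance" concrete, here is a minimal sketch of how one might test that self-report. Everything in it is an assumption for illustration: `ask_llm` is a hypothetical stand-in for any chat-completion call, and the prompt wording and toy question set are invented, not taken from the comment or from Anthropic's work.

```python
# Sketch: score an LLM's "do I recall this?" self-report against ground truth.
# `ask_llm`, the prompt, and the toy dataset are hypothetical placeholders.
import random
from typing import Callable

def recall_probe(ask_llm: Callable[[str], str],
                 items: list[tuple[str, bool]]) -> float:
    """Return how often the model's claimed recall matches ground truth.

    `items` pairs a question with whether the underlying fact is real (True)
    or fabricated (False). On a balanced set, chance agreement is ~0.5.
    """
    hits = 0
    for question, is_real in items:
        answer = ask_llm(
            f"Question: {question}\n"
            "Before answering, reply only YES if you are confident you recall "
            "this from training, or NO if you might be guessing."
        )
        claims_recall = answer.strip().upper().startswith("YES")
        hits += int(claims_recall == is_real)
    return hits / len(items)

if __name__ == "__main__":
    # A dummy "model" that guesses at random should score near 0.5;
    # a model with real introspection should score noticeably higher.
    coin_flip = lambda _prompt: random.choice(["YES", "NO"])
    toy_items = [("Who wrote 'Dune'?", True),
                 ("Who wrote 'The Glass Orchard of Venus'?", False)] * 10
    print(recall_probe(coin_flip, toy_items))
```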