Comment by audunw

2 days ago

The one big thing missing from LLMs is the ability to express how confident they are in the truth of what they're saying.

Perhaps this could be a step in that direction, if we can associate the attribution with the likelihood of being true: an attribution to arXiv should carry more weight than one to science fiction, for example. But what is the attribution when the model hallucinates a citation? I'm guessing it would still be attributed to scientific sources. So it does nothing to fix the most damaging instances of hallucination?
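
To make the weighting idea concrete, here's a minimal sketch of what it might look like. Everything in it is invented for illustration (the RELIABILITY priors, the category names, the confidence_from_attribution helper); no real attribution system exposes an interface like this.

    # Hypothetical: score a generated claim by weighting its attributed
    # training-source categories with an assumed reliability prior.
    # All numbers and category names are made up for illustration.

    RELIABILITY = {
        "arxiv": 0.9,     # assumed prior for scientific preprints
        "news": 0.6,
        "fiction": 0.1,
    }

    def confidence_from_attribution(attributions):
        """attributions: list of (source_category, weight) pairs, with
        weights summing to 1, saying how much each category contributed
        to the claim. Unknown categories get a neutral 0.5 prior."""
        return sum(RELIABILITY.get(src, 0.5) * w for src, w in attributions)

    # A claim attributed mostly to arXiv scores high...
    print(confidence_from_attribution([("arxiv", 0.8), ("news", 0.2)]))  # 0.84

    # ...but a hallucinated citation attributed to arXiv scores just as
    # high -- which is exactly the failure mode described above.
    print(confidence_from_attribution([("arxiv", 1.0)]))                 # 0.9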