Comment by nerdjon
9 hours ago
> I still dislike the term "hallucinations". It comes across like the model did something wrong. It did not, as factually wrong outputs happen per design.
While I do see the issue with the word "hallucination" humanizing the models, I have yet to come up with, or see, a word that explains the problem so well to non-technical people. And quite frankly, those are the people who need to understand that this problem still very much exists and is likely never going away.
Technically, yeah, the model is doing exactly what it is supposed to do, and you could argue that all of its output is "hallucination". But for most people, the idea of a hallucinated answer is easy enough to understand without diving into how the systems work and just confusing them more.
> And quite frankly those are the people that need to understand that this problem still very much exists and is likely never going away.
Calling it a hallucination leads people to think that they just need to stop it from hallucinating.
In layman's terms, it'd be better to say that LLMs are schizophrenic, even though that's not really accurate either.
A better framing would be that the models only understand reality through what they've read about it, and then we ask them for answers "in their own words". But that's a lot longer than "hallucination".
It's like the gag in The 40-Year-Old Virgin where he describes breasts as feeling like bags of sand.