Comment by a-dub
16 hours ago
>> I still dislike the term "hallucinations". It comes across like the model did something wrong. It did not, as factually wrong outputs happen per design.

> can you hear yourself? you are providing excuses for a computer system that produces erroneous output.
No, he does not.
He is not saying it's OK for this system to provide wrong answers; he is saying it's normal for information from an LLM to be unreliable, and thus the issue comes not from the LLM but from the way it is being used.
We are in the late stage of the LLM hype cycle, where comments are becoming progressively more ridiculous, much like with cryptocurrencies before that market crashed. The other day a user posted that LLMs are the new transistors or the new electricity.