Comment by leprechaun1066

11 hours ago

Another way to look at it is that everything an LLM creates is a 'hallucination'; some of these 'hallucinations' are just more useful than others.

I do agree with the parent post. Calling them hallucinations is not an accurate way of describing what is happening, and using such terms to personify these machines is a mistake.

This isn't to say the outputs aren't useful; we see that they can be very useful...when used well.