
Comment by antonvs

1 day ago

Not all human hallucinations are lies, though. I really think you’re not fully thinking this through. People have beliefs because of, essentially, their training data.

A good example of this is religious belief. All the evidence suggests that religious belief is essentially 100% hallucination. It may be a little different from the nature of LLM hallucinations, but in terms of quality or quantity regarding reliability of what these entities say, I don’t see much difference. Although I will say, LLMs are better at acknowledging errors than humans tend to be, although that may largely be due to training to be sycophantic.

The bottom line, though, is I don’t agree that humans are less subject to hallucinations than LLMs are. As long as a significant number of humans rabbit on about “higher powers”, afterlives, “angels”, “destiny”, etc., that’s a ridiculously difficult position to defend.

> It may be a little different from the nature of LLM hallucinations, but in terms of quality or quantity regarding reliability of what these entities say, I don’t see much difference.

I see tons of differences.

The origins of many religious beliefs have to do with explaining how and why the world functions the way it does; many gods, across many religions, were created to explain natural forces or mechanisms of society, in the form of a story, which is the natural way human brains have evolved to store large amounts of information.

In the modern world, religions persist for a variety of reasons: the acquisition of wealth and power, the ability to exert social control over populations with minimal resistance, and cultural inertia. But all of those "hallucinations" can be explained; we know most of their histories and origins, and what we don't know can be guessed pretty reliably from what we do know.

So when you say:

> Not all human hallucinations are lies, though. ... People have [hallucinations] because of, essentially, their training data.

You're correct, but even using the word "hallucination" itself is giving away some of the game to AI marketers.

A "hallucination" is typically some type of auditory or visual stimulus that is present in a mind, for a whole mess of reasons, that does not align with the world that mind is observing, and in the vast majority of cases, said hallucination is a byproduct of a mind's "reasoning machine" trying to make sense of nonsensical sensory input.

This requires a basis for the mind perceiving the universe, even in error, and judging incorrectly based on that perception, and LLMs do not fit this description at all. They do not perceive in any way; even advanced machine learning applications are not using sensors to truly "sense", they are merely paging through input data and referencing existing data to pattern-match against it.

If you show an ML program 6,000 images of scooters, it will be able to identify a scooter pretty well. But if you then show it a bike, a motorcycle, a moped, and a Segway, it will not understand that any of these things accomplish a similar goal, because even though it knows (kind of) what a scooter looks like, it has no idea what it is for or why someone would want one, or that all those other items would probably serve a similar purpose.
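To make that concrete, here's a minimal sketch of what "pattern matching without understanding" means. It uses toy 2-D feature vectors as stand-ins for images, a second made-up "chair" class, and a nearest-neighbour lookup as a stand-in for a trained classifier; all of the names and numbers are illustrative, not taken from any real system.

```python
# Minimal sketch: a pattern matcher has no concept of "purpose".
# Toy stand-in for an image classifier: 2-D feature vectors instead of real
# pixels, and a 1-nearest-neighbour "model" instead of a deep network.
# Everything here is illustrative, not from any real dataset or system.

import numpy as np

rng = np.random.default_rng(0)

# Pretend these are embeddings of 6,000 scooter photos and 6,000 chair photos.
scooters = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(6000, 2))
chairs = rng.normal(loc=[-1.0, -1.0], scale=0.1, size=(6000, 2))

train_x = np.vstack([scooters, chairs])
train_y = np.array(["scooter"] * 6000 + ["chair"] * 6000)


def classify(x):
    """Return the label of the nearest training example: pure pattern matching."""
    distances = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(distances)]


# A new scooter-like input is matched correctly...
print(classify(np.array([1.05, 0.95])))   # -> "scooter"

# ...but a bike, moped, or Segway can only ever be mapped onto one of the
# labels the model has already seen. Nothing in the model encodes that all
# of these things exist to move a person around.
print(classify(np.array([0.8, 1.2])))     # -> "scooter" (closest pattern)
print(classify(np.array([-0.9, -1.1])))   # -> "chair"
```

The lookup will always hand back one of its stored labels for any input, but nothing anywhere in it represents what a scooter is for, which is the whole point.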

> The bottom line, though, is I don’t agree that humans are less subject to hallucinations than LLMs are.

That's still not what I said. I said an LLM's lies, however unintentional, are harder to detect than a person's lies because a person lies for a reason, even a stupid reason. An LLM lies because it doesn't understand anything it's actually saying.