Comment by ToucanLoucan

1 day ago

> How is this different from humans?

Humans will generally not do this, because being made to look stupid (aka social pressure) incentivizes not doing it. That doesn't mean humans never lie or are never wrong, of course. But I don't know about you; I don't make shit up nearly to the degree an LLM does. If I don't know something, I just say that.

> To make this claim you need a theory of consciousness that essentially denies materialism.

I did not say "a machine would never be conscious"; I said "an LLM will never be conscious," and I fully stand by that. I think machine intelligence is absolutely something that can be made; I just don't think ChatGPT will ever be that.

> I don't know about you; I don't make shit up nearly to the degree an LLM does. If I don't know something, I just say that.

We're a sample of two, though. Look around you, read the news, etc. Humans make a lot of shit up. When you're dealing with other people, this is something you have to watch out for if you don't want to be misled, manipulated, conned, etc.

(As an aside, I haven't found hallucination to be much of an issue in coding and software design tasks, which is what I use LLMs for daily. I think focusing on their hallucinations involves a bit of confirmation bias.)

> I did not say "a machine would never be conscious"; I said "an LLM will never be conscious," and I fully stand by that.

Ah ok. Yes, I agree that seems likely, although I think it's not really possible to make definitive statements about this sort of thing, since we don't have any robust theories of consciousness at the moment.

  • The difference between a hallucination and a lie is important, though: a hallucination is a lie with no motivation, which can make it significantly harder to detect.

    If you went to a hardware store and asked for a spark plug socket without knowing the size, and a customer service person recommended an imperial set of three even though your vehicle is metric, that would be akin to an LLM hallucination: it didn't happen for any particular reason; it just filled in information where none was available. An actual person, even one not terribly committed to their job, would ask what size, or, failing that, what year of car.

    • Not all human hallucinations are lies, though. I really think you’re not fully thinking this through. People have beliefs because of, essentially, their training data.

      A good example of this is religious belief. All the evidence suggests that religious belief is essentially 100% hallucination. It may differ somewhat in nature from LLM hallucinations, but in terms of the quality or quantity of reliability in what these entities say, I don't see much difference. I will say, though, that LLMs are better at acknowledging errors than humans tend to be, although that may largely be due to their training to be sycophantic.

      The bottom line, though, is that I don't agree that humans are less subject to hallucinations than LLMs are. As long as a significant number of humans rabbit on about "higher powers", afterlives, "angels", "destiny", etc., that's a ridiculously difficult position to defend.