Comment by krapp

2 years ago

It's also a mistaken anthropomorphism to claim that human beings "make the exact same mistakes" as LLMs, because they don't. Humans don't confabulate the way LLMs do unless they have a severe mental illness. A human doctor isn't likely to simply make up diseases, or symptoms, or medications, whereas an LLM will do so routinely, because it doesn't understand anything like human anatomy, disease, chemistry or medicine, only the stochastic matching of text tokens.

We're not unnecessarily harsh on hallucinations; the harshness is absolutely necessary because of how effective LLMs are at convincing people that, because they can generate language, they are capable of sentient thought, self-awareness and reason. Acting as if humans and LLMs are basically equally trustworthy, or worse, that LLMs are more trustworthy, is dangerous. If we accept this as axiomatic, shit will break and people will die.

I hear what you’re saying. Yes, of course we should aim to make our LLMs as trustworthy as possible. I think my argument was more philosophical than practical. I meant that directing real anger at them seems misguided; after all, humans lie with intent or real negligence, while LLMs are random number generators. And ‘don’t believe everything you read on the internet’ is advice that persists regardless of AI-generated content; we shouldn’t expect to lower our guard any time soon. But yes, I strongly agree that the danger arises because people DON’T treat LLMs the same way and get lulled into a false sense of trust.