Comment by jph00
2 years ago
> We do not normally hallucinate. We are sometimes wrong, and sometimes are wrong about the confidence they should attach to their knowledge. But we do not simply hallucinate and spout fully confidence nonsense constantly. That is what LLMs.
In my average interaction with GPT-4 there are far fewer errors than in this paragraph. I would say that here you do, in fact, "spout fully confidence nonsense" (sic).
Some humans are better than others at saying things that are correct, and at saying things with appropriately calibrated confidence. Some LLMs are better than some humans in some situations at doing these things.
You seem to be hung up on the word "hallucinate". It is, indeed, not a great word, and many researchers are a bit annoyed that it's the term that has stuck. It simply means an LLM stating something incorrect as if it were true.
The times that LLMs do this do stand out, precisely because "You remember a few isolated incidents because they're salient".
> Some humans are better than others at saying things that are correct, and at saying things with appropriately calibrated confidence.
That's true, which is why we have constructed a society with endless selection processes. Starting from kindergarten, we are constantly assessing people's abilities, so that by the time someone is interviewing for a safety-critical job they have passed through a huge number of gates.