Comment by calf

2 years ago

Ever since the "stochastic parrots" and "super-autocomplete" criticisms of LLMs, the question has been whether hallucinations are solvable in principle at all. And if hallucinations are solvable, it would be of such basic and fundamental scientific importance that I think it would be another mini-breakthrough in AI.

An interesting perspective on this I’ve heard discussed is whether hallucinations ought to be solved at all, or whether they are core to the way human intelligence works as well, in the sense that they are what is needed to produce narratives.

I believe it is Hinton who prefers “confabulation” to “hallucination” because it’s more accurate. The example in the discussion about hallucination/confabulation was that of someone who had been present in the room during Nixon’s Watergate conversations. Interviewed about what he heard, he provided a narrative that got many facts wrong (who said what, and what exactly was said). Later, when audio tapes surfaced, the inaccuracies in his testimony became known. However, he had “confabulated truthfully”. That is, he had made up a narrative that fit his recall as best he was able, and the gist of it was true.

Without the ability to confabulate, he would have been unable to tell his story.

(Incidentally, because I did not check the facts of what I just recounted, I just did the same thing…)

  • > Without the ability to confabulate, he would have been unable to tell his story.

    You can tell a story without making up fiction. Just say you don’t know when you don’t know.

    Inaccurate information is worse than no information.

    • > You can tell a story without making up fiction. Just say you don’t know when you don’t know.

      The point is that humans generally can't, because we don't actually know which parts of what we "remember" are real and which parts are our brain filling in the blanks. And maybe it's the same for nonhuman intelligences too.

    • If “confabulation” is necessary, you could enable it for the use cases where it’s needed and turn it off for those where you need to return actual “correct” information.

  • I've read similar thoughts before about AI art. When the process was still developing, you would see AI "artwork" consisting of the most inhumanly uncanny pictures: things that twisted the physical forms human artists perceive through the fundamental pixel format/denoising algorithms the AI works with. It was just uniquely AI, and not something a human being would be able to replicate. "There are no errors, just happy accidents." In there, you could say, was a real art medium/genre with its own intrinsic worth.

    After a few months, AI developers refined the process to simply replicate images so they looked like a human being had made them, in effect killing what was the real AI art.