Comment by epidemian
6 days ago
> If you define a hallucination as something that wasn't in the training data, directly or indirectly (indirectly being something like an "obvious" abstract concept), then [...]
Ok, sure. But why would you choose to define hallucinations in a way that is contrary to common sense and the normal understanding of what an AI hallucination is?
The common definition of hallucinations is basically: when AI makes shit up and presents it as fact. (And the more technical definition also basically aligns with that.)
No one would say the AI is hallucinating if it takes the data you provide in the prompt and deduces a correct answer for that specific data, even though that answer is not directly or indirectly present in its training data. In fact, that is exactly what you'd expect an intelligent system to do.
It seems to me you're arguing against something nobody said. You're making it seem as if saying "it's bad that LLMs can invent wrong/misleading information like this and present it as fact, and that the companies that deploy them don't seem to care" is equivalent to saying "I want LLMs to be perfect and have no bugs whatsoever", and then arguing about how ridiculous the latter would be.
I intentionally referenced their comment to make it clear to you that perfection is their twice-stated requirement, even under a charitable interpretation. Here they are again:
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again
> If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.
Neither of those ("didn't happen again" and "don't hallucinate") is logically ambiguous or flexible. I can only respond to what they wrote.