Comment by DonHopkins

5 days ago

[flagged]

I think you're missing their point. The question you're replying to is: how do we know that this made-up content is a hallucination, i.e., as opposed to being made up by a human? I think it's fairly obvious via Occam's Razor, but still, they're not claiming the quotes could be legit.

  • [dead]

    • You seem to be quite certain that I had not read the article, yet I distinctly remember doing so.

      By what process do you imagine I arrived at the conclusion that the article suggested the published quotes were LLM hallucinations, when that was not mentioned in the article title?

      You accuse me of performative skepticism, yet all I am saying is that evidence is better than assumptions, and that it is worth asking whether such evidence exists.

      That seems a much better approach than making false accusations based on your own vibes; I don't think Scott Shambaugh went to that level, though.


There is a third option: The journalist who wrote the article made the quotes up without an LLM.

I think calling the incorrect output of an LLM a “hallucination” is too kind to the companies creating these models, even if it’s technically accurate. “Being lied to” is a more accurate description of how the end user feels.

  • The journalist was almost certainly using an LLM, and a cheap one at that. The quote reads as if the model was instructed to build a quote solely using its context window.

    Lying is deliberately deceiving, but yeah, to a reader, who is in effect a trusting customer paying with the portion of their attention diverted to advertising, broadcasting a hallucination is essentially the same thing.