Comment by Lerc

5 days ago

Has it been shown or admitted that the quotes were hallucinations, or is the presumption now that all made-up content is a hallucination?

Another red flag is that the article used repetitive phrases in an AI-like way:

"...it illustrates exactly the kind of unsupervised output that makes open source maintainers wary."

followed later on by

"[It] illustrates exactly the kind of unsupervised behavior that makes open source maintainers wary of AI contributions in the first place."

  • I used to be skeptical that AI-generated text could be reliably detected, but after a couple of years of reading it, cracks are starting to form in that skepticism.

Gen AI only produces hallucinations (confabulations).

The utility is that the inferred output tends to be right much more often than wrong for mainstream knowledge.
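To make that concrete, here's a toy sketch of the decoding step (my own illustration, not pulled from any real model's code): tokens get drawn by probability, and nothing in the loop checks whether the output is true.

    import math, random

    # Toy next-token step. A real model scores ~100k candidate tokens
    # with a neural net, but the sampling at the end looks like this:
    # softmax the scores, then draw one token by probability. Nothing
    # here checks whether a candidate is *true*. (Logit values below
    # are made up for illustration.)
    logits = {"1989": 4.2, "1991": 2.1, "1987": 1.3}

    def sample(logits, temperature=1.0):
        # Softmax over the logits, then a single weighted draw.
        exps = {t: math.exp(v / temperature) for t, v in logits.items()}
        r = random.random() * sum(exps.values())
        for token, e in exps.items():
            r -= e
            if r <= 0:
                return token
        return token  # float-rounding fallback: last token

    print(sample(logits))  # plausible most of the time, wrong sometimes

The plausible completion simply wins most of the draws. "Right for mainstream knowledge" is a statistical tendency of the mechanism, not a property it checks for.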

You could read the original blog post...

  • How could that prove hallucinations? It could only possibly prove that they are not. If the quotes are in the original post, then they are not hallucinations. If they are not in the post, they could have come from something that is not an LLM.

    Misquotes and fabricated quotes existed long before AI, and indeed, long before computers.

[flagged]

  • I think you're missing their point. The question you're replying to is: how do we know that this made-up content is a hallucination, i.e., as opposed to being made up by a human? I think it's fairly obvious via Occam's razor, but still, they're not claiming the quotes could be legit.

  • There is a third option: The journalist who wrote the article made the quotes up without an LLM.

    I think calling the incorrect output of an LLM a “hallucination” is too kind to the companies creating these models, even if it’s technically accurate. “Being lied to” is a more accurate description of how the end user feels.

    • The journalist was almost certainly using an LLM, and a cheap one at that. The quote reads as if the model was instructed to build a quote solely using its context window.

      Lying means deliberately deceiving, but yeah, to a reader, who is in effect a trusting customer paying with the part of their attention diverted to advertising, broadcasting a hallucination is essentially the same thing.