Comment by lurk2

8 days ago

This is 40 screenshots of a writer at the New Yorker finding out that LLMs hallucinate, almost 3 years after GPT-2 was released. I’ve always held journalists in low regard, but how can one work in this field and only just now be finding out about the limitations of this technology?

3 years ago people understood that LLMs hallucinate and shouldn't be trusted with important tasks.

Somehow in the 3 years since then the mindset has shifted to "well, it works well enough for X, Y, and Z, maybe I'll talk to GPT about my mental health." Which, to me, makes the article much more timely than if it had been released 3 years ago.

  • I disagree with your premise that 3 years ago “people” knew about hallucinations or that these models shouldn’t be trusted.

    I would argue that today most people do not understand that and actually trust LLM output more on face value.

    Unless maybe you mean people = software engineers who at least dabble in some AI research/learning on the side.

She's a writer submitting original short pieces to the New Yorker in hopes of being published, by no stretch a "journalist," let alone one "at the New Yorker." I've always held judgmental HN commenters in low regard, but how can one take the time to count the screenshots without picking up on the basic narrative context?

  • > She's a writer submitting original short pieces to the New Yorker in hopes of being published, by no stretch a "journalist" let alone "at the New Yorker".

    Her substack bio reads: Writer/Photographer/Editor/New Yorker. Is the ordinary interpretation of that not: “I am a writer / photographer / editor at the New Yorker”?