Comment by MSM

8 days ago

3 years ago, people understood that LLMs hallucinated and shouldn't be trusted with important tasks.

Somehow, in the 3 years since then, the mindset has shifted to "well, it works well enough for X, Y, and Z, so maybe I'll talk to GPT about my mental health." Which, to me, makes that article much more timely than if it had been released 3 years ago.

I disagree with your premise that 3 years ago “people” knew about hallucinations or that these models shouldn’t be trusted.

I would argue that today most people do not understand that, and in fact take LLM output at face value.

Unless maybe by "people" you mean software engineers who at least dabble in some AI research or reading on the side.