Comment by akdev1l
8 days ago
I disagree with your premise that three years ago "people" knew about hallucinations or that these models shouldn't be trusted.
I would argue that today most people still don't understand that, and in fact take LLM output at face value.
Unless maybe you mean people = software engineers who at least dabble in some AI research or learning on the side.