Comment by lostmsu
1 year ago
I don't think the "hallucination problem" is a problem worth addressing separately from just building bigger/better models that do the same thing, because 1) it is present in humans too, and 2) it is clear that bigger models hallucinate less than smaller ones. If nothing changes at scale, LLMs will eventually just hallucinate less than humans do.