Comment by PlatoIsADisease
1 month ago
Wow that link was absurdly bad.
Reading that makes me unbelievably happy I played with GPT-3 and learned how and when LLMs fail.
Telling it not to hallucinate reflects a serious misunderstanding of LLMs. At most, in 2026, you are telling a thinking/CoT model to double-check its work.
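To make that concrete, here's a rough sketch of the difference between a bare "do not hallucinate" instruction and a prompt that actually gives a reasoning model something checkable to do. None of this is from the original comment: `call_llm` is a hypothetical stand-in for whatever chat client you use, and the prompts and question are illustrative only.

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion client.

    Returns a canned string here so the sketch runs; in practice you would
    wire this to whatever LLM API you actually use.
    """
    return f"[model response to: {user_prompt!r} under: {system_prompt!r}]"


question = "When did this feature ship, and in which release?"

# The style the comment criticizes: a bare instruction the model has no
# concrete way to act on.
naive_answer = call_llm(
    system_prompt="Answer the question. Do not hallucinate.",
    user_prompt=question,
)

# Closer to "telling the thinking/CoT model to double-check": spell out a
# verification step and give it an explicit way to say it isn't sure.
checked_answer = call_llm(
    system_prompt=(
        "Answer the question. Before answering, list the specific facts your "
        "answer depends on and whether each is supported by the provided "
        "context. If any fact is unsupported, answer 'I am not sure' instead."
    ),
    user_prompt=question,
)

print(naive_answer)
print(checked_answer)
```

The second prompt still can't force correctness; it just turns a vague "don't hallucinate" wish into a concrete check the model can actually perform.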