Comment by elicksaur
12 hours ago
LLMs don’t actually “hallucinate”. A hallucination would mean the LLM starting from a different context than the one it was actually given, and given the fidelity of electronic transmission these days, that’s probably never an issue.
LLMs also have no grounding in abstract concepts such as true and false. This means they produce output stochastically rather than logically. Sometimes people are illogical too, but people are also durable, learn over time, and learn extremely quickly. Current LLMs only learn once, at training time, so they easily get stuck in loops and pitfalls when they produce output that makes no sense to the human reader. The LLM can’t “understand” that the output makes no sense because it doesn’t “understand” anything in the sense that humans understand things.