Comment by _heimdall
2 days ago
The term "hallucinations" has always frustrated me. The marketing there makes sense, but an LLM that hallucinates is an LLM doing exactly what it was designed for: predicting what a human might say in response.
Facts don't really play a part there; if a response is factual, it's only a sign that the training set largely agreed on the facts (meaning the correlation of the token sequence was high).