Comment by InkCanon
4 hours ago
The field is massively hampered by the wishful mnemonics and anthropomorphization of LLMs. For example, even the idea of "hallucination" arbitrarily assigns human semantics to LLM outputs. By the actual mathematical principles by which LLMs work, a hallucination is just another output, with no clear distinction between it and every other output.
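A minimal sketch of the point (the vocabulary and logit values below are made up purely for illustration): at each step an LLM samples a token from a softmax over logits, so a factually wrong continuation is produced by exactly the same mechanism as a correct one; nothing in the math flags either as a "hallucination".

```python
import math
import random

# Toy next-token step. The model emits logits over a small vocabulary
# and a token is sampled from the softmax distribution. A wrong
# continuation travels the same code path as a correct one.
vocab = ["Paris", "Lyon", "Atlantis"]   # hypothetical continuations of "The capital of France is"
logits = [4.0, 1.5, 0.8]                # made-up scores; "Atlantis" is merely low-probability, not marked

exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

token = random.choices(vocab, weights=probs)[0]
print(dict(zip(vocab, probs)), "->", token)
```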