Comment by noboostforyou

9 hours ago

If you're being 100% literal, sure. But language evolves and it's the accepted term for the concept. OpenAI themselves use the phrase - https://openai.com/index/why-language-models-hallucinate/

OpenAI are the last people I would take as a reference, because they are financially motivated to keep up the charade of a "thinking" LLM or so-called "AI". That's why they widely use anthropomorphic terms like "hallucination", "reasoning", or "thinking", while their computer programs can do neither of those things. LLM companies sometimes even expose their own hypocrisy. My favorite example so far is when Anthropic showed in their own paper that asking an LLM how it "reasoned" through calculating a sum of numbers doesn't match what actually happened at all; it's all generated slop.

This is why it is important that we, the users, don't fall into the anthropomorphism trap and instead call these programs what they are and describe what they really do. This is especially important since the general public seems to be deluded by OpenAI's and Anthropic's aggressive lies and believes that LLMs can think.