
Comment by Closi

2 years ago

Not always. I think that is an unfair reflection of LLMs in their current state. See two trivial examples below:

https://chat.openai.com/share/ca733a4a-7cdb-4515-abd0-0444a4...

https://chat.openai.com/share/dced0cb7-b6c3-4c85-bc16-cdbf22...

Hallucinations are definitely a problem, but they are certainly less common than they used to be. They will often say that they aren't sure but can speculate, or "it might be because..." etc.

I get the feeling that LLMs will only tell you they don’t know if “I don’t know” appears as a response in their training data. When they actually don’t know, i.e. there is no trained response to draw on, that’s when they start hallucinating.