Comment by TerrifiedMouse

2 years ago

> In the end the quality of the information it will get back to you is no better than the quality of a thorough google search.. it will just get you a more concise and well-formatted answer faster.

I would say it’s worse than Google search. Google tells you when it can’t find what you are looking for. LLMs “guess” a bullshit answer.

Not always; I think that's an unfair reflection of LLMs in their current state. See two trivial examples below:

https://chat.openai.com/share/ca733a4a-7cdb-4515-abd0-0444a4...

https://chat.openai.com/share/dced0cb7-b6c3-4c85-bc16-cdbf22...

Hallucinations are definitely a problem, but they are certainly less common than they used to be. Models will often say that they aren't sure but can speculate, or hedge with "it might be because..." etc.

  • I get the feeling that LLMs will tell you they don't know only if "I don't know" is one of the responses in their training data set. If they genuinely don't know, i.e. there is no relevant trained response, that's when they start hallucinating.