Comment by smusamashah
1 year ago
A human WILL NOT make up non-existent facts, URLs, libraries, and so on, unless they deliberately want to deceive.
They can make mistakes in understanding something, and in most cases they will be able to explain those mistakes.
LLM and human mistakes ARE NOT the same.
> A human WILL NOT make up non-existent facts
Categorically not true, and there are so many examples of this in everyday practice that I can't help but feel you're saying this to disprove your own statement.
It's absolutely true. Humans misremember details, but I'll ask an LLM what function to use to do something and it'll literally tell me a function exists named DoExactlyWhatIWant, and humans never do that unless they're liars. And I don't go to liars for information -- same reason I don't trust LLMs.
Tell me, does an LLM know when it lies?
I don't see a functional difference between misremembering details and lying. Sure, one is innocent and the other malicious, but functionally a compiler doesn't care whether the source of your error is evil or not.
An LLM doesn't know when it lies, but a human also doesn't know when they are (innocently) wrong.
Non-liar humans do in fact make mistakes.
True, but if they started producing non-existent facts as consistently as LLMs do, you would be concerned about their mental health.
I insist that human hallucinations are NOT THE SAME as LLMs'.