Comment by fn-mote
6 months ago
> I am not a fan of this kind of communication. It doesn't know, so it tries to deflect the shortcoming onto the user.
This is a very human-like response when asked a question that you think you know the answer to, but don't want to accuse the asker of having an incorrect premise. State what you think, then leave the door open to being wrong.
Whether or not you want this kind of communication from a machine, I'm less sure... but really, what's the issue?
The problem of an incorrect premise comes up all the time. Assuming the person asking the question is correct 100% of the time isn't wise.
Humans use the phrase "I don't know."
AI never does.
>I'm not aware of any MS-DOS productivity program...
>I don't know of any MS-DOS productivity programs...
I dunno, seems pretty similar to me.
And in a totally unrelated query today, I got the following response:
>That's a great question, but I don't have current information...
Sounds a lot like "I don't know".
>> And in a totally unrelated query today, I got the following response:
>That's a great question,
Found the LLM whose training corpus includes transcripts of every motivational speaker and TED talk Q&A ever...
Because there is no "I don't know" in the training data. Can you imagine a forum where, in response to a question about some obscure easter egg, there are hundreds of "I don't know" replies?
You gave one explanation, but the problem remains.