Comment by Mawr

2 years ago

> Are humans limited to low-risk applications like that?

Yes, of course. That's exactly why the systems the parent mentioned were designed to take humans out of the safety-critical loop.

> Because humans, even some of the most humble, will still assert things they THINK are true, but are patently untrue, based on misunderstandings, faulty memories, confused reasoning, and a plethora of others.

> I can't count the number of times I've had conversations with extremely well-experienced, smart techies who just spout off the most ignorant stuff.

The key difference is that when the human you're having a conversation with states something, you're able to ascertain the likelihood of it being true based on available context: How well do you know them? How knowledgeable are they about the subject matter? Does their body language indicate uncertainty? Have they historically been a reliable source of information?

No such introspection is possible with LLMs. Any part of anything they say could be wrong, and to any degree!