Comment by nozzlegear
3 days ago
IMO it's because people have learned not to trust LLMs. It's like using AI code generators – they're a useful tool if you know what you're doing, but you need to review the material they produce and verify that it works (in this case, verify that what they say is correct). When they're used as a source in conversations, we never know if the "dev" has "reviewed the code," so to speak, or just copied and pasted.
As for why people don't like LLMs being wrong versus a human being wrong, I think it's twofold:
1. LLMs have a nasty penchant for sounding overly confident and "bullshitting" their way to an answer in a way that most humans don't. Where we'd say "I'm not sure," an LLM will say "It's obviously this."
2. This is speculation, but at least when a human is wrong you can say "hey, you're wrong because of [fact]," and they'll usually learn from that. We can't do that with LLMs because they don't learn (in the way humans do), and in this situation they're a degree removed from the conversation anyway.