Comment by travisgriggs
1 day ago
I have this same reaction.
But I also have to honestly ask myself “aren’t humans also prone to make stuff up” when they feel they need to have an answer, but don’t really have one?
And yet, despite admitting that humans hallucinate and make mistakes too, I remain uncomfortable placing ultimate trust in LLMs.
Perhaps, while LLMs simulate authority well, there is an uncanny valley effect in trusting them, because some of the other aspects of interacting with an authoritative person are “off”.