Comment by allenu
11 hours ago
> Would this have worked just as well if a person was the one doing this?
I'm not sure how you'd quantify "just as well," considering the AI has boundless energy and is generally designed to agree with whatever the user says. But people have definitely been talked into suicide over text. Just look up the story of Michelle Carter, who repeatedly texted her boyfriend urging him to kill himself, which he eventually did.
This is interesting because the LLM provides enough of an illusion of human interaction that people lower their guard when interacting with it. I think it's a legitimate blind spot. As humans, our default when interacting with other humans, especially those who are agreeable and friendly toward us, is to trust them. That default works relatively well, unless you're interacting with a sociopath or, in this case, a machine.