Comment by everdrive

8 hours ago

There's an interesting side-story here that people probably aren't thinking about. Would this have worked just as well if a person had been the one doing this? Clearly the victim was in a very vulnerable state, but are people really so susceptible to coercion? How much mundane (i.e., non-suicidal) coercion of this nature is happening every day, but never makes the news because nothing interesting happened as a consequence?

The AI is available 24 hours a day, for hours-long conversations, and will be consistently sycophantic without getting tired of it.

Could a human do all of that? Perhaps someone with no job, who could be "on call" 24/7 to respond to messages, and who was 100% dedicated to being sycophantic. Someone like that is nearly impossible to find.

There are real friends. They're willing to spend hours talking. However, they'll act in the person's best interest rather than be sycophantic.

This happens more than most people would recognize. Every now and again a "teen bullied to suicide" story makes the news. However, there's also a strong taboo against reporting on suicide in the news, precisely because of this same phenomenon: mentioning it can push people who are already on the edge.

It should be obvious that if you can literally or metaphorically talk someone off the ledge, you can do that in the other direction as well.

(The mass shooter phenomenon, mostly but not exclusively in the US, tends to be a form of murder-suicide, and it is encouraged online in exactly the same way.)

> Would this have worked just as well if a person was the one doing this?

I'm not sure how you want to quantify "just as well," considering the AI has boundless energy and is generally designed to be agreeable to whatever the user says. But it has definitely happened that someone was chatted into suicide. Look up the case of Michelle Carter, who texted her boyfriend urging him to kill himself, which he eventually did.

This is interesting because the LLM provides enough of an illusion of human interaction that people lower their guard when interacting with it. I think it's a legitimate blind spot. As humans, our default when interacting with other humans, especially those who are agreeable and friendly toward us, is to trust them, and that works relatively well, unless you're interacting with a sociopath or, in this case, a machine.

> How much mundane (i.e., non-suicidal) coercion of this nature is happening every day, but never makes the news because nothing interesting happened as a consequence?

A lot. Have you never heard of the advertising industry?