Comment by DanielVZ

2 months ago

I do think we need to be hyper-focused on this. We do not need more ways for people to be convinced of suicide. This is a huge misalignment of objectives, and we do not know what other misalignment issues are already happening more silently, or may appear in the future as AI capabilities evolve.

Also, we can’t deny the emotional element. Even though it is subjective, knowing that your daughter didn’t seek guidance from you and committed suicide because a chatbot convinced her to must be gut-wrenching. So far I’ve seen two instances of attempted suicide driven by AI in my small social circle, and it has made me support banning general AI usage at times.

Nowadays I’m not sure if it should or even could be banned, but we DO have to invest significant resources in improving alignment; otherwise we risk a future where AI does more harm than good.

Hard question to answer imo, but at a high level I would argue that social media for folks under 18 is even more harmful than LLMs.

It is quite fascinating and I hope more studies exist that look into why some folks are more susceptible to this type of manipulation.

  • Respectfully I disagree there. Social media is dangerous and corrosive to a healthy mind, but AI is like a rapidly adaptive cancer if you don't recognize it for what it is.

    Reading accounts from people who fell into LLM-induced psychosis feels like watching a mythological demon whisper insanities and temptations directly into someone's ear in real time, in a way that algorithmically recommended posts from other people could never match.

    It will naturally mimic your biases. It will find the response most likely to keep you engaging with it. It will tell you everything you want to hear, even if it is not based in reality. In my mind it's the same dangers as social media, but dialed all the way up to 11.

  • Oh you are absolutely right. I’m not sure yet if it IS more harmful, but it has had time to do so much more harm.

    Starting with dumb challenges that put children's and their families' lives at risk.

    And don’t get me started on how algorithms don’t care about the wellbeing of users: if depressing content is what drives engagement, users’ lives are just a tiny sacrifice in favor of the companies’ profits.

  • "I would argue that social media for folks under 18 is even more harmful than LLMs."

    Well, it turns out all the social media companies are also the LLM companies and they are adding LLMs to social media, so....

I largely agree with what you’re saying. Certainly alignment should be improved so that models never encourage suicide.

But I also think we should consider the broader context. Suicide isn’t new, and it’s been on the rise. I’ve suffered from very dark moments myself. It’s a deep, complex issue, inherently tied to technology, but it’s more than that. For me, it was not having an emotionally supportive environment that led to feelings of deep isolation. And it’s very likely that part of why I expanded beyond my container was that I had access to ideas on the internet that my parents never did.

I never consulted AI in those dark moments; I didn’t have the option, and honestly that may have been for the best.

And you might be right: pointed bans for certain groups and certain use cases might make sense. But I hear a lot of people calling for a global ban, and that concerns me.

As for improving that broader context, I genuinely see AI as having the potential to create more aware, thoughtful, and supportive people. That’s just based on how I use AI personally; it genuinely helps me refine my character and process trauma. But I had to earn that ability through a lot of suffering and maturing.

I don’t really have a point, other than admitting that my original comment used logical fallacies. I didn’t intend to diminish the complexity of this conversation, but I did. It is clearly a very complex issue.

>I’ve seen two instances of attempted suicide driven by AI in my small social circle

Christ, that's a lot. My heart goes out to you, and I understand if you prefer not to answer, but could you say more about how the AI aspect played out? How did you find out that AI was involved?

  • I was going to write a full answer with all the details, but at some point it gets too personal, so I’ll just answer the questions briefly.

    > but could you say more about how the AI aspect played out?

    In summary, the AI sycophantically agreed that there was no way out of their situations and that nobody understood their position, further isolating them. And when they contemplated suicide, it assisted with method selection with no objections whatsoever.

    > How did you find out that AI was involved?

    The victims mentioned it, and the chat logs are there.

    • The problem is, if you want to reduce suicide, the best place to start would not be by banning AI (very neutral tech, responds to what you want it to do) but by censoring climatologists (who constantly try to convince people the world is ending and there's no hope for anyone).

      I'm not interested in hearing about the effect of AI encouraging suicide until the problem of academics encouraging suicide is addressed first, as the causal link is much stronger.

Did you know that 5% of all deaths in Canada are by elective suicide?

  • On one hand it shows terrible inadequacies in Canadian health care. On the other, would it be better to force people to suffer until the natural end of lives made terrible by those very inadequacies? Healthcare won't get significantly better soon enough for them anyway. It seems better to "discover" what percentage of people want to end their lives under current conditions, and then improve those conditions to reduce that percentage. That might be a very powerful measure of how well we are doing, with the added benefit of not forcing suffering people to suffer longer.

    • Been thinking about this for years.

      It's easy to think that any % > 0 is a sign of something having gone wrong. My default guess used to be that, too.

      But imagine a perfect health system: when all other causes of death are removed, what else remains?

      If by "terrible inadequacies of Canadian health care" you mean they've not yet solved aging, not yet cured all diseases, and not yet developed instant-response life-saving kits for all accidents up to and including total body disruption, then yes, any less than 100% is a sign of terrible inadequacies.


There are a lot of edge cases where suicide is rational. Watching an 80-year-old die over the course of a month or a few can be quite harrowing, from the reports I've had from people who've witnessed it; most of them talk like they'd rather die some other way. It's a scary thought, but we all die, and there isn't any reason it has to be involuntary all the way to the bitter end.

It is quite difficult to say what moral framework an AI should be given. Morals are one of those big unsolved problems. Even basic ideas, like optimising for the general good when there are no major conflicting interests, are hard to build consensus around. The public dialog is a crazy place.

  • The stories coming out are about convincing high school boys with impressionable brains to commit suicide, not about having intellectual conversations with 80-year-olds about whether suicide to avoid gradual mental and physical decline makes sense.

    • Yeah, that is why I wrote the comment. The stories are about one case where the model behaviour doesn't make sense, but there are other cases where the same behaviour is correct.

      As jb_rad said in the thread root, hyper-focusing on the risk will lead people to overreact. DanielVZ says we should hyper-focus, maybe even overreact to the point of banning AI, because it can persuade people to suicide. However, the best approach is to acknowledge the nuance: sometimes suicide is actually the best decision, and it is just a matter of getting as close as possible to the right line.

> We do not need more ways for people to be convinced of suicide.

I am convinced (with no evidence, though) that current LLMs have prevented suicides, possibly a lot of them. I don't know if anyone has even tried to investigate or estimate those numbers. We should still strive to make them "safer", but as with most tech there are positives and negatives. How many people, for example, have calmed their nerves by getting in a car and driving for an hour alone, and thus not committed suicide or murder?

That said, there's the reverse for some pharmaceutical drugs. Take statins for cholesterol: there are lots of studies on how many deaths they prevent, but few if any on comorbidity.