Comment by observationist

12 hours ago

So is television. So are books. Vulnerable people shouldn't have unfettered access to things that can lead to dangerous feedback loops and losing their grasp on reality.

People who are vulnerable to this type of thing need caretakers, or to be institutionalized. These aren't average, everyday people getting taken out by AIs; they have existing, extreme mental illness. They need their entire routine curated and managed, keeping them away from anything that can result in dangerous outcomes: anything that can trigger obsessive behaviors, paranoid delusions, and so on.

They're not just fragile; they're unable to effectively engage with reality on their own. Sometimes the right medication and behavioral training get them to a point where they can have limited independence, but oftentimes they need a lifetime of supervision.

Telenovelas, brand names, celebrities, specific food items, a word - AI is just the latest thing in a world full of phenomena that can utterly consume their reality.

Gavalas seems to have had a psychotic break, was likely susceptible to schizophrenia or other conditions, and spiraled out. AI is just a convenient target for lawyers taking advantage of the grieving parents, who want an explanation for what happened that doesn't involve admitting they failed to recognize their son's mental breakdown and intervene, or confronting that they were powerless despite everything they did to intervene.

Sometimes bad things happen. To good people, too.

If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible? What if he used Microsoft Word to write his suicide note? If he googled things that, in context, painted a picture of planning mass murder and suicide, should Google be held accountable for not notifying authorities? Why should the use of AI tools be any different?

Google should not be surveilling users and making judgments about legality or ethicality or morality. They shouldn't be intervening without specific warrants and legal oversight by proper authorities within the constraints of due process.

Google isn't responsible for this guy's death just because he spiraled out while using Gemini. We don't want Google, or any other AI platform, to take on that responsibility or to engage in the invasive surveillance that would be necessary to accomplish it. That's absurd, and far more evil than the tragedy of one man dying by suicide and using AI along the way.

You don't want Google or OpenAI making mental health diagnoses, judgments about your state of mind, character, or agency, and initiating actions with legal consequences. You don't want Claude or ChatGPT initiating a 5150, or triggering a welfare check, because they decided something is off about the way you're prompting, and they feel legally obligated to go that far because they want to avoid liability.

I hope this case gets tossed, but also that those parents find some sort of peace; it's a terrible situation all around.

> If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible?

I think the scale of the assistance is important. If his Bic pen had been encouraging him to commit mass murder, then Bic should absolutely be held accountable.

> Why should the use of AI tools be any different?

Because none of the tools you mentioned are aggressively marketed as being intelligent.

You have a valid point, but it has nothing to do with what I said; both our arguments can be true at the same time.

  • LLMs are intelligent. Marketing them as such is an accurate description of what they are.

    If people are confusing the word intelligence with things like maturity or wisdom, that's not a marketing problem; that's an education and culture problem, and we should be getting people to learn more about what these tools are and how they work. The platforms themselves frequently disclaim reliance on their tools - seek professional guidance, experts, doctors, lawyers, etc. They're not being marketed as substitutes for expert human judgment. In fact, all the AI companies are marketing their platforms as augmentations for humans - insisting you need a human in the loop, to be careful about hallucinations, and so forth.

    The implication is that there's some liability for misunderstandings or improper use due to these tools being marketed as intelligent; I'm not sure I see how that could be?

    • Calling LLMs "intelligent" is not a neutral technical description; it carries strong anthropomorphic implications that shape how users interpret and trust these systems.

      Remember that decades of research in human-computer interaction show that framing and interface design strongly influence user perception.

      Disclaimers also do little to counteract this effect. Because LLMs simulate linguistic competence without understanding or truth-tracking mechanisms, marketing them as intelligent risks systematically misleading users about their capabilities and limitations.

    • LLMs are NOT intelligent. They are mathematical equations that produce results that give the appearance of intelligence. That is NOT the same thing.

> These aren't just average, every day random people getting taken out by AIs, they have existing, extreme mental illness.

How do you know that? The concern is precisely that this isn't the case, and LLM roleplay is capable of "hooking" people going through psychologically normal sadness or distress. That's what the family believes happened in this story.

  • Because otherwise you'd see a large number of people being affected by this, and because this sort of thing is predictable and normal throughout history; it's exactly the type of thing you'd expect to see, knowing the range of mental illnesses people are susceptible to and how other technology has affected them.

    • I do see a large number of people getting affected by this. Character.AI reportedly has 20 million MAU with an average usage of 75 minutes per day (https://www.wired.com/story/character-ai-ceo-chatbots-entert...), and, as far as I can tell, does not have any use case other than boundary-degrading roleplay.

      Medical data is reported on a substantial lag in the US, so right now we have no idea of the suicide rate last year, but I would falsifiably predict it's going to be elevated because of stories like those of Mr. Gavalas.

Just stuff anyone with mental illness into an institution. That worked out so well last time. Or maybe make healthcare affordable and accessible. That seems like a far more obvious way to reduce negative outcomes.

I broadly agree with you, but your views on mental illness are not good.

  • The core problem is that a not-insignificant number of mentally ill people are absolutely convinced that they are totally fine and sane, and legally you cannot force an adult into treatment.

    • Same with drug addicts. However, an accessible and affordable system gives off-ramps in moments of lucidity or desperation. Most people in moments of extreme self-assurance are in an ephemeral state that will eventually change. Untreated mental illness is rarely consistent. That's what can make it dangerous to the person experiencing it and those around them.

Blame the victims! If they were better or did the right things instead of the wrong things, they wouldn't have been victimized!