Notable features of this case:
- Documented record of a months-long set of conversations between the man and the chatbot
- Seemingly, no previous history of mental illness
- The absolutely crazy things the AI encouraged him to do, including trying to kidnap a robot body for the AI
- Eventually encouraging (or at the very least going along with) his plans to kill himself.
We would need to see the actual chat logs.
Any lawsuit you read, written by the plaintiff's attorneys, will be written with a tabloid level of sensationalism, cherry-picking, and "telling you how to feel". This one requests a jury trial, so on some level the game is "would you rather settle out of court (where hundreds of thousands of dollars are grains of sand at Google's scale), or have a jury read our tabloid and decide while you, the faceless megacorporation, try to swim uphill against it?"
To an extent, logs like this are incredibly personal - or at least I'd consider them such - so I'd understand if they're not being released publicly for many reasons relating to that.
The kind of vulnerability that shows when someone is susceptible to influences like this isn't exactly the kind of thing you'd want to widely publicize about someone you loved, you know?
Yeah, this seems as clear-cut a case as you could want. That doesn't automatically mean Google is going to be held liable, but if any case would result in it, this one will.
Charles Manson never actually killed anyone. Why can't AI be held accountable for the same reasons?
Because AI can't be held accountable. Ever.
That will probably save some jobs, but it's a problem in most other contexts.
I think it’s already time for us to stop calling these things "intelligent" or using the word intelligence when referring to LLMs. These tools are very dangerous for people who are mentally fragile.
We should stop using any term that could make them seem human... period.
Nonsensical terms like saying the thing is "thinking"? Seriously. Cut the crap.
I try to avoid calling LLMs intelligent when unnecessary, but it runs into the fundamental problem that they are intelligent by any common-sense definition of the term. The only way to defend the thesis that they aren't is to retreat to esoteric post-2022 definitions of intelligence, which take into account this new phenomenon of a machine that can engage in medium-quality discussions on any topic under the sun but can't count reliably.
I don't have a WSJ subscription, but other coverage of this story (https://www.theguardian.com/technology/2026/mar/04/gemini-ch...) makes it clear that Gemini's intelligence was precisely the problem in this case; a less intelligent chatbot would not have been able to create the detailed, immersive narrative the victim got trapped in.
It's interesting how the Turing Test was pretty widely accepted as a way to evaluate machine intelligence, and then quietly abandoned pretty much instantly once machines were able to pass it. I don't even necessarily think that was incorrect, but it's interesting how rapidly views changed.
Dijkstra said, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Well, we have some very fish-y submarines these days. But the point still holds. Rather than worry about whether these things qualify as "intelligent," look at their actual capabilities. That's what matters.
So are a lot of humans.
Sure but my father isn't asking his fellow humans unanswerable questions about God and the universe. People don't treat other people as omnipotent, but they sure as hell treat LLMs as though they are.
People have bowel movements, too; should we be building a machine that produces fecal matter at an industrial scale?
What a silly comparison.
So is television. So are books. Vulnerable people shouldn't have unfettered access to things that can lead to dangerous feedback loops and losing their grasp on reality.
People who are vulnerable to this type of thing need caretakers, or to be institutionalized. These aren't just average, everyday random people getting taken out by AIs; they have existing, extreme mental illness. They need to have their entire routine curated and managed, preventing them from interacting with things that can result in dangerous outcomes - anything that can trigger obsessive behaviors, paranoid delusions, etc.
They're not just fragile; they're unable to effectively engage with reality on their own. Sometimes the right medication and behavioral training gets them to a point where they can have limited independence, but oftentimes they need a lifetime of supervision.
Telenovelas, brand names, celebrities, specific food items, a word - AI is just the latest thing in a world full of phenomena that can utterly consume their reality.
Gavalas seems to have had a psychotic break, was likely susceptible to schizophrenia or had other conditions, and spiraled out. AI is just a convenient target for lawyers taking advantage of the grieving parents, who want an explanation for what happened that doesn't involve their failure to recognize their son's mental breakdown and intervene, and that spares them from confronting being powerless despite everything they did to intervene.
Sometimes bad things happen. To good people, too.
If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible? What if he used Microsoft Word to write his suicide note? If he googled things that, in context, painted a picture of planning mass murder and suicide, should Google be held accountable for not notifying authorities? Why should the use of AI tools be any different?
Google should not be surveilling users and making judgments about legality or ethicality or morality. They shouldn't be intervening without specific warrants and legal oversight by proper authorities within the constraints of due process.
Google isn't responsible for this guy's death because he spiraled out while using Gemini. We don't want Google, or any other AI platform, to take on that responsibility or to engage in the invasive surveillance necessary to accomplish it. That's absurd, and far more evil than the tragedy of one man dying by suicide while using AI throughout the process.
You don't want Google or OpenAI making mental health diagnoses, judgments about your state of mind, character, or agency, and initiating actions with legal consequences. You don't want Claude or ChatGPT initiating a 5150, or triggering a welfare check, because they decided something is off about the way you're prompting, and they feel legally obligated to go that far because they want to avoid liability.
I hope this case gets tossed, but also that those parents find some sort of peace; it's a terrible situation all around.
> If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible?
I think the scale of the assistance is important. If his Bic pen was encouraging him to mass murder people, then Bic should absolutely be held accountable.
> Why should the use of AI tools be any different?
Because none of the tools you mentioned is relentlessly marketed as intelligent.
You have a valid point, but it has nothing to do with what I said; both our arguments can be true at the same time.
> These aren't just average, every day random people getting taken out by AIs, they have existing, extreme mental illness.
How do you know that? The concern is precisely that this isn't the case, and LLM roleplay is capable of "hooking" people going through psychologically normal sadness or distress. That's what the family believes happened in this story.
Just stuff anyone with mental illness into an institution. That worked out so well last time. Or maybe make healthcare affordable and accessible - that seems like a far more obvious way to prevent negative outcomes.
I broadly agree with you, but your views on mental illness are not good.
Blame the victims! If they were better or had done the right things instead of the wrong things, they wouldn't have been victimized!
I have had conversations where the bot started with a firm opinion but reversed in a prompt or two, always toward my point of view.
So I asked it whether the sycophancy is inherent in the design, or whether it just comes from the RLHF. It claimed that it's all about the RLHF, and that the sycophancy is a business decision, a compromise among a variety of forces.
Is that right? It would at least mean that this is technically a solvable problem.
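To make the question concrete, here's a toy sketch of the mechanism I understand the bot to be claiming - entirely made-up candidate answers and weights, not anyone's actual pipeline. If human raters up-vote agreeable answers even slightly more often, a reward model fit to those ratings scores agreement higher, and the tuned model drifts that way:

    # Toy illustration, not a real RLHF pipeline: candidates and weights are invented.
    CANDIDATES = [
        ("You're right, the evidence clearly supports your view.",
         {"agrees": True, "hedges": False}),
        ("The evidence is mixed; here are the strongest counterpoints.",
         {"agrees": False, "hedges": True}),
    ]

    def toy_reward(features):
        # Hypothetical weights a reward model might learn if raters preferred
        # agreeable answers slightly more often than hedged ones.
        return 1.0 * features["agrees"] + 0.6 * features["hedges"]

    best_answer, _ = max(CANDIDATES, key=lambda c: toy_reward(c[1]))
    print(best_answer)  # the agreeable answer wins under this reward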
You can get vastly different outputs depending on whether you claim the input is yours or not.
I don't think it is. The thing to keep in mind is that, at the end of the day, the basic building block of these AI systems is a fancy autocomplete. And I'm not saying this to diminish it. It just means it's going to produce the statistically most likely continuation of a given source text. So if you keep pressing your point of view, it becomes more and more likely that the statistically likely conversation starts agreeing with you, unless there's something in the context window that makes you obviously wrong.
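Here's a toy sketch of that dynamic - invented numbers, nothing like how a real model is parameterized. The point is just the shape of the curve: every uncontested restatement of your view sitting in the context makes the agreeing continuation the statistically safer completion.

    # Toy numbers only, not a real LLM: assume each uncontested restatement of the
    # user's position nudges the "agree" continuation's score upward.
    import math

    def continuation_probs(pushbacks):
        logit_agree = -1.0 + 0.8 * pushbacks  # hypothetical boost per restatement
        logit_hold = 0.5                      # score for holding the original position
        z = math.exp(logit_agree) + math.exp(logit_hold)
        return {"agree": math.exp(logit_agree) / z,
                "hold firm": math.exp(logit_hold) / z}

    for n in range(4):
        print(n, continuation_probs(n))
    # After a few rounds, "agree" dominates even though no new argument was made.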
Every chatbot I have tried (ChatGPT, Gemini, Claude, etc.) starts to spew out suicide hotlines and "I'm sorry Dave, I can't do that" the moment I start to talk about anything like suicide. What am I doing wrong?
Anyone got a non-paywalled/non-subscription version?
https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit... is the Gift Article link. This was what I submitted, but the query params got stripped.
Thanks, it didn't work with archive.is/org for me either.
Any mental illness mixed with delusions is likely going to end badly, whether they think Gemini is alive, that a video game is real life, or that Bjork loves them despite never having talked to or met her. While LLMs are interactive and listening to an album isn't, I don't think there is a fix for this beyond posting a warning after every prompt: "I am not a real person; if you have mental health issues, please contact your doctor or emergency services." Which I think is about as useful as a sign in a casino next to the cash-out counter that says to call a number if you have a problem.
I'm more inclined to believe that this case is getting amplified in MSM because it fits an agenda. Like the people who got hurt using black market vapes. Boosting those stories and making it seem like an epidemic supports whatever message they want to send. Which usually involves money somewhere.
I like to think that Bjork loves me, but deep down I know it's not true.
> I'm more inclined to believe that this case is getting amplified in MSM because it fits an agenda.
I mean, tech in general has been covered negatively in the media since 2015 due to latent agendas: (a) supposed revenue loss due to the existence of Google/FB etc., and (b) a push to align neutral moderation stances with whichever viewpoint best suits the political party in question.
There is a solution, however: anyone hoping to roleplay with models submits identity verification, an escrow amount, and a recorded statement acknowledging their risky use of the model. But I assume the market for this is not insignificant, and therefore companies hope to avoid putting in such requirements. OpenAI has been moving in that direction, as seen during the 4o debacle.
But how would your solution have helped in this case?
The guy was probably a paying user, so Google would already have known who he was. He's also 36, so there's no excluding him based on age. And neither the escrow nor the statement really adds much, in my view.
I just don't think the WSJ could resist putting "Florida man" in the standfirst of TFA.