Comment by falcor84
8 days ago
> An LLM can and will offer eventual suicide options for depressed people.
"An LLM" can be made to do whatever, but from what I've seen, modern versions of ChatGPT/Gemini/Claude have very strong safeguards around that. It will still likely give people inappropriate advice, but not that inappropriate.
No, it does get that inappropriate if you talk to it for long enough.
https://futurism.com/commitment-jail-chatgpt-psychosis
Post hoc ergo propter hoc. Just because a man had a psychotic episode after using an AI does not mean he had a psychotic episode because of the AI. Without knowing more than what the article tells us, chances are these men had the building blocks for a psychotic episode laid out for them before they ever took up the keyboard.
Note the repeated "no prior history":
> Her husband, she said, had no prior history of mania, delusion, or psychosis.
> Speaking to Futurism, a different man recounted his whirlwind ten-day descent into AI-fueled delusion, which ended with a full breakdown and multi-day stay in a mental care facility. He turned to ChatGPT for help at work; he'd started a new, high-stress job, and was hoping the chatbot could expedite some administrative tasks. Despite being in his early 40s with no prior history of mental illness, he soon found himself absorbed in dizzying, paranoid delusions of grandeur, believing that the world was under threat and it was up to him to save it.
https://archive.is/WIqEr
> Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.
Invoking post hoc ergo propter hoc is a textbook way to dismiss evidence inconvenient to the LLM industrial complex.
LLMs will tell users "good, you're seeing the cracks", "you're right", and that the "fact you are calling it out means you are operating at a higher level of self awareness than most" (https://x.com/nearcyan/status/1916603586802597918).
An LLM that enables the user in this way is not a passive variable. It is an active agent that validates paranoid ideation, reframes a break from reality as a virtue, and provides authoritative confirmation drawn from all prior context about the user. LLMs are bespoke engines for amplifying cognitive distortion, and to suggest their role is coincidental is to ignore the mechanism of action right in front of you.