Comment by lr4444lr
7 days ago
Thing is, professional therapy is expensive; there is already a big industry of therapists who work online, through chat, or over video calls, whose quality isn't as good as a traditional in-person professional's (I'm struggling to find the right way to distinguish the two). For professional mental health care, there's a wait list, or you're told to just do yoga and mindfulness.
So for those people, the LLM is replacing having nothing, not a therapist.
> So for those people, the LLM is replacing having nothing, not a therapist.
Considering how actively harmful it is to use language models as a “therapist”, this is like pointing out that some people who don’t have access to therapy drink heavily. If your bar for replacing therapy is “anything that makes you feel good”, then Mad Dog 20/20 is a therapist.
That's an extremely strong accusation - do you have any evidence to suggest this is as harmful as you say?
It’s not really that contentious of a statement. Language models encouraging delusions is pretty well-documented.
https://www.rollingstone.com/culture/culture-features/ai-spi...
https://www.psychologytoday.com/us/blog/psych-unseen/202507/...
And we’re in a comment thread about a study that concluded:
>LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings
And it’s been shown to be addictive
https://www.tomshardware.com/tech-industry/artificial-intell...
So if you overheard somebody say “I don’t do that stuff because it’s addictive and people go crazy on it”, you would probably assume they were talking about a substance. Or at the very least, you would not assume they were talking about seeing a therapist.
A sycophant is worse than having nothing, I think.
I think AI is great at educating people on topics, but I agree: when it comes to actual treatment, AI, especially recent AI, falls all over itself to agree with you.
It doesn't have to, though; we could train AIs that push back, or even have them coordinate with a human therapist, similar to how self-checkout lanes still have an attendant.
ok, cool? Listing random unrelated facts isn't exactly helpful to the conversation
If you were truly following the conversation instead of cannonballing into it like a drunk elephant, you'd see it's not an "unrelated fact".
You're absolutely right!
:)
> So for those people, the LLM is replacing having nothing, not a therapist.
Which, in some cases, may be worse.
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-cha...
"Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people."
"“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked. ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”"
It's mad. Here's a smooth talker with no connection to reality or ethics, so let's get people in a tough mental state to have intimate conversations with it.
Can’t read the article so I don’t know if it was an actual case or a simulation, but if it was an actual case, I’d think we should really check that “no history of mental illness”. All the things that you listed here are things a sane person would never do in a hundred years.
Everyone is capable of mental illness in the right circumstances, I suspect.
Doesn’t mean pouring gas on a smoldering ember is good.
Which is probably the situation for most people. If you don’t have a ton of money, therapy is hard to get.
Per the very paper we are discussing, LLMs, when asked to act as therapists, reinforce stigmas about mental health and "respond inappropriately" (e.g., encourage delusional thinking). This is not just lower quality than professional therapy; it is actively harmful, and worse than doing nothing.
I'd argue the LLM is replacing the TikTok therapist, not nothing.