Comment by fwip
7 months ago
The first link says that patients can't reliably tell which message is from the therapist and which is from the LLM when shown single messages — which, yeah, is an LLM core competency.
The second is a "how 2 use AI 4 therapy" paper, and there's at least one paper like that for every field.
The last found that LLMs were measurably worse at therapy than humans.
So, yeah, I'm comfortable agreeing that all LLMs are bad therapists, and bad friends too.
There's also been a spate of reports like this one recently: https://www.papsychotherapy.org/blog/when-the-chatbot-become...
That outcome is definitely worse than not going to a therapist at all.
If I think "it understands me better than any human," that's dissociation? Oh boy. All this time — while life has been slamming me with unemployment, while my toddler is at the age of maximum energy extraction (4), devastating my health and social life — I thought it was just a fellow-intelligence lifeline.
Here's a gut check anyone can do, assuming you use a customized ChatGPT-4o and have lots of past conversations it can draw on: ask it to roast you, and tell it not to hold back.
If you wince, it "knows you" quite well, IMHO.
It sounds like you might have been quite lonely recently. It's nice to have an on-demand chatbot that feels like socialization — I get it. But an LLM doesn't "know you," and believing that it does is one of the first steps toward the problems described in that article.
Ironically, that's an AI-written article.