The number of comments in the thread talking about 4o as if it were their best friend they shared all their secrets with is concerning. Lotta lonely folks out there.
No, this isn't always the case.
Perhaps if somebody were to shut down your favourite online shooter without warning, you'd be upset, angry, and passionate about it.
Some people like myself fall into this same category: we know it's a token generator under the hood, but the duality is that it's also entertainment in the shape of something that acts like a close friend.
We can see the distinction; evidently some people don't.
This is no different to other hobbies some people may find odd or geeky - hobby horsing, ham radio, cosplay, etc.
> We can see the distinction; evidently some people don't.
> This is no different to other hobbies some people may find odd or geeky
It is quite different, and you yourself explained why: some people can’t see the distinction between ChatGPT being a token generator and it being an intelligent friend. People aren’t describing the latter as “odd or geeky” but as dangerous and harmful.
I would never get so invested in something I didn’t control.
They may stop making new episodes of a favoured TV show, or writing new books, but the old ones will not suddenly disappear.
How can you shut down cosplay? I guess you could pass a law banning ham radio or owning a horse, but that isn’t sudden in democratic countries; it takes months if not years.
I think his point is that an even better close friend is…a close friend
People were saying they'd kill themselves if OpenAI didn't immediately undeprecate GPT-4o. I would not have this reaction to a game being shut down.
Where do they all come from? Where do they all belong?
Reddit
You win today.
Lack of third places where people can exist and make friends.
Wait until you see
https://www.reddit.com/r/MyBoyfriendIsAI/
They are very upset by the GPT-5 model.
AI safety is focused on AGI, but maybe it should be focused on how little “artificial intelligence” it takes to send people completely off the rails. We could barely handle social media; LLMs seem to be too much.
What's even sadder is that so many of those posts and comments are clearly written by ChatGPT:
https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_jus...
That subreddit is fascinating and yet saddening at the same time. What I read will haunt me.
oh god, this is some real authentic dystopia right here
these things are going to end up in android bots in 10 years too
(honestly, I wouldn't mind a super smart, friendly bot in my old age that knew all my quirks but was always helpful... I just would not have a full-on relationship with said entity!)
I don't know how to describe this other than sad and cringe. At least people obsessed with owning multiple cats are giving their affection to something that can theoretically love them back.
Oh yikes, these people are ill and legitimately need help.
I refuse to believe that this whole subreddit is not satire or an elaborate prank.
It seems outrageous that a company whose purported mission is centered on AI safety is catering to a crowd whose use case is virtual boyfriend or pseudo-therapy.
Maybe AI... shouldn't be convenient to use for such purposes.
I weep for humanity. This is satire, right? On the flip side, I guess you could charge these users more to keep 4o around, because they're definitely going to pay.
Which is a bit frightening because a lot of the r/ChatGPT comments strike me as unhinged - it's like you would have thought that OpenAI murdered their puppy or something.
This is only going to get worse.
Anyone who remembers the reaction when Sydney from Microsoft or, more recently, Maya from Sesame lost their respective 'personalities' can easily see that product managers are going to have to start paying attention to the emotional impact of changing or shutting down models.
I think the fickle "personality" of these systems is a clue to how the entity supposedly possessing a personality doesn't really exist in the first place.
Stories are being performed at us, and we're encouraged to imagine characters have a durable existence.
Or they could just do it whenever they want, for whatever reason they want. They are not responsible for the mental health of their users; their users are responsible for that themselves.
Yeah it’s really bad over there. Like when a website changes its UI and people prefer the older look… except they’re acting like the old look was a personal friend who died.
I think LLMs are amazing technology but we’re in for really weird times as people become attached to these things.
I mean, I don’t mind the Claude 3 funeral. It seems like it was a fun event.
I’m less worried about the specific complaints about model deprecation, which can be ‘solved’ for those people by not deprecating the models (obviously costs the AI firms). I’m more worried about AI-induced psychosis.
An analogy I saw recently that I liked: when a cat sees a laser pointer, it is a fun thing to chase. For dogs it is sometimes similar and sometimes it completely breaks the dog’s brain and the dog is never the same again. I feel like AI for us may be more like laser pointers for dogs, and some among us are just not prepared to handle these kinds of AI interactions in a healthy way.
A puppy is just as inhuman as this program. Is it really any crazier to care about one than the other?
There are lots of physiological signs that dogs are capable of proto-empathy, that dogs and humans engage in some form of emotional co-regulation at a physiological level, e.g.: https://pmc.ncbi.nlm.nih.gov/articles/PMC6554395/
Considering how much d-listers can lose their shit over a puppet, I’m not surprised by anything.
>unhinged
It's Reddit, what were you expecting?
I kind of agree with you as I wouldn't use LLMs for that.
But also, one cannot speak for everybody: if it's useful for someone in that context, why is that an issue?
Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment with their users.
The conversational capabilities of these models directly engages people's relational wiring and easily fools many people into believing:
(a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)
(b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus
(c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)
I'm sure any of us can mentally enumerate a lot of the resulting negative effects. As with social media, there's a temptation to replace important relational parts of life with engaging an LLM, as it always responds immediately with something that feels at least somewhat meaningful.
But in my opinion the worst effect is that there's a temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- and getting sucked deeply into some kind of spiritual engagement with it, and causing destruction to their real relationships as a result.
A parallel problem is that just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even if they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.
> We ought to treat their output with significant detachment and objectivity, especially when it gives personal advice or offers support.
Eh, ChatGPT is inherently more trustworthy than average, if only because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego.
Does it care about you more than most people do? Yes, simply by not being interested in hurting you, not needing anything from you, and being willing not to go away.
Speaking for myself: the human mind does not seek truth or goodness; it primarily seeks satisfaction. That satisfaction happens in a context, and every context is at least a little bit different.
The scary part: It is very easy for LLMs to pick up someone's satisfaction context and feed it back to them. That can distort the original satisfaction context, and it may provide improper satisfaction (if a human did this, it might be called "joining a cult" or "emotional abuse" or "co-dependence").
You may also hear this expressed as "wire-heading".
If treating an LLM as a bestie is allowing yourself to be "wire-headed"... Can gaming be "wire-heading"?
Does the severity or excess matter? Is "a little" OK?
This also reminds me of one of Michael Crichton's earliest works (and a fantastic one IMHO), The Terminal Man
https://en.wikipedia.org/wiki/The_Terminal_Man
https://1lib.sk/book/1743198/d790fa/the-terminal-man.html
The issue is that people in general are very easy to fool into believing something harmful is helping them. If it were actually useful, there would be no issue. But just because someone believes it's useful doesn't mean it actually is.
Well, because in a worst-case scenario, if the pilot of a big airliner opts for ChatGPT therapy instead of the real thing and then dies by suicide while flying, other people suffer the consequences too.
Pilots don't go to real therapy, because real pilots don't get sad
https://www.nytimes.com/2025/03/18/magazine/airline-pilot-me...
That's the worst case scenario? I can always construct worse ones. Suppose Donald Trump goes to a bad therapist and then decides to launch nukes at Russia. Damn, this therapy profession needs to be hard regulated. It could lead to the extinction of mankind.
Because it's probably not great for one's mental health to pretend a statistical model is one's friend?
Whether it's the Hippocratic oath, the rules of the APA, or those of any other organization, almost all share "do no harm" as a core tenet.
LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.
The counter argument is that’s just a training problem, and IMO it’s a fair point. Neural nets are used as classifiers all the time; it’s reasonable that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it.
The real problem is that we can’t tell when or if we’ve reached that point. The risk of a malpractice suit influences how human doctors act. You can’t sue an LLM. It has no fear of losing its license.
>LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.
I understand this as a precautionary approach that fundamentally prioritizes mitigating bad outcomes, and it's a valuable judgment to that end. But I also think the same statement can be viewed as the latest claim in the long-running debate over "computers can't do X." The credibility of those declarations is under more fire now than ever before.
Regardless of whether you agree that it's perfect, or that it can be in full alignment with human values as a matter of principle, at a bare minimum a model can be and is trained to avoid various forms of harmful discourse. And that training obviously has an impact, judging from the voluminous reports of how noticeably differently models behave depending on whether or not they have guardrails.
So I don't mind it as a precautionary principle, but as an assessment of what computers are in principle capable of doing it might be selling them short.
Neither can most of the doctors I've talked to in the past, like ... 20 years or so.
Having an LLM as a friend or therapist would be like having a sociopath for those things -- not that an LLM is necessarily evil or antisocial, but they certainly meet the "lacks a sense of moral responsibility or social conscience" part of the definition.
Fuck.
Well, like, that's just your opinion, man.
And probably close to wrong if we are looking at the sheer scale of use.
There is a bit of reality denial among anti-AI people. I've thought about why some people don't adjust to this new reality. One friend of mine was anti-AI and seems to remain so because his reputation is partly built on proving he is smart; another, because their job is at risk.
Are all humans good friends and therapists?
Not all humans are good friends and therapists. All LLMs are bad friends and therapists.
> All LLMs are bad friends and therapists.
Is that just your gut feeling? Because there has been some preliminary research suggesting it's, at the very least, an open question:
https://neurosciencenews.com/ai-chatgpt-psychotherapy-28415/
https://pmc.ncbi.nlm.nih.gov/articles/PMC10987499/
https://arxiv.org/html/2409.02244v2
That is an extreme claim. What is your source for it?
Absolutes, monastic take... Yeah I imagine not a lot of people seek out your advice.
All humans are not LLMs, why does this constantly get brought up?
> All humans are not LLMs
What a confusing sentence to parse
You wouldn't necessarily know, talking to some of them.