
Comment by jelder

7 months ago

Well, good, because these things make bad friends and worse therapists.

The number of comments in the thread talking about 4o as if it were their best friend they shared all their secrets with is concerning. Lotta lonely folks out there.

  • No, this isn't always the case.

    Perhaps if somebody were to shut down your favourite online shooter without warning you'd be upset, angry and passionate about it.

    Some people, like myself, fall into this same category: we know it's a token generator under the hood, but the duality is that it's also entertainment in the shape of something that acts like a close friend.

    We can see the distinction, evidently some people don't.

    This is no different to other hobbies some people may find odd or geeky - hobby horsing, ham radio, cosplay etc etc.

    • > We can see the distinction, evidently some people don't.

      > This is no different to other hobbies some people may find odd or geeky

      It is quite different, and you yourself explained why: some people can’t see the distinction between ChatGPT being a token generator or an intelligent friend. People aren’t talking about the latter being “odd or geeky” but being dangerous and harmful.

    • I would never get so invested in something I didn’t control.

      They may stop making new episodes of a favoured tv show, or writing new books, but the old ones will not suddenly disappear.

      How can you shut down cosplay? I guess you could pass a law banning ham radio or owning a horse, but that isn’t sudden in democratic countries, it takes months if not years.


  • Wait until you see

    https://www.reddit.com/r/MyBoyfriendIsAI/

    They are very upset by the GPT-5 model.

    Which is a bit frightening, because a lot of the r/ChatGPT comments strike me as unhinged - it's like you would have thought that OpenAI murdered their puppy or something.

  • This is only going to get worse.

    Anyone who remembers the reaction when Sydney from Microsoft, or more recently Maya from Sesame, lost their respective 'personality' can easily see how product managers are going to have to start paying attention to the emotional impact of changing or shutting down models.

    • I think the fickle "personality" of these systems is a clue to how the entity supposedly possessing a personality doesn't really exist in the first place.

      Stories are being performed at us, and we're encouraged to imagine characters have a durable existence.


    • Or they could just do it whenever they want to for whatever reason they want to. They are not responsible for the mental health of their users. Their users are responsible for that themselves.


  • Yeah it’s really bad over there. Like when a website changes its UI and people prefer the older look… except they’re acting like the old look was a personal friend who died.

    I think LLMs are amazing technology but we’re in for really weird times as people become attached to these things.

    • I mean, I don’t mind the Claude 3 funeral. It seems like it was a fun event.

      I’m less worried about the specific complaints about model deprecation, which can be ‘solved’ for those people by not deprecating the models (obviously costs the AI firms). I’m more worried about AI-induced psychosis.

      An analogy I saw recently that I liked: when a cat sees a laser pointer, it is a fun thing to chase. For dogs it is sometimes similar and sometimes it completely breaks the dog’s brain and the dog is never the same again. I feel like AI for us may be more like laser pointers for dogs, and some among us are just not prepared to handle these kinds of AI interactions in a healthy way.


  • Considering how much d-listers can lose their shit over a puppet, I’m not surprised by anything.

I kind of agree with you as I wouldn't use LLMs for that.

But also, one cannot speak for everybody; if it's useful for someone in that context, why is that an issue?

  • Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment with their users.

    The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing:

    (a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)

    (b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

    (c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)

    I'm sure any of us can mentally enumerate a lot of the resulting negative effects. Like social media, there's a temptation to replace important relational parts of life with engaging an LLM, as it always responds immediately with something that feels at least somewhat meaningful.

    But in my opinion the worst effect is that there's a temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- and getting sucked deeply into some kind of spiritual engagement with it, and causing destruction to their real relationships as a result.

    A parallel problem is that just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even if they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.

    • > We ought to treat their output with significant detachment and objectivity, especially when it gives personal advice or offers support.

      Eh, ChatGPT is inherently more trustworthy than average, simply because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego.

      Does it care about you more than most people? Yes, by simply being not interested in hurting you, not needing anything from you, and being willing to not go away.


  • Speaking for myself: the human mind does not seek truth or goodness, it primarily seeks satisfaction. That satisfaction happens in a context, and every context is at least a little bit different.

    The scary part: It is very easy for LLMs to pick up someone's satisfaction context and feed it back to them. That can distort the original satisfaction context, and it may provide improper satisfaction (if a human did this, it might be called "joining a cult" or "emotional abuse" or "co-dependence").

    You may also hear this expressed as "wire-heading."

  • The issue is that people in general are very easy to fool into believing something harmful is helping them. If it were actually useful, it wouldn't be an issue. But just because someone believes it's useful doesn't mean it actually is.

  • Well, because in a worst-case scenario, if the pilot of a big airliner decides to do ChatGPT therapy instead of real therapy and then dies by suicide while flying, other people also feel the consequences.

  • Because it's probably not great for one's mental health to pretend a statistical model is ones friend?

  • Whether the Hippocratic oath, the rules of the APA, or those of any other organization, almost all share "do no harm" as a core tenet.

    LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

    • The counter argument is that’s just a training problem, and IMO it’s a fair point. Neural nets are used as classifiers all the time; it’s reasonable that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it.

      The real problem is that we can’t tell when or if we’ve reached that point. The risk of a malpractice suit influences how human doctors act. You can’t sue an LLM. It has no fear of losing its license.


    • >LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

      I understand this as a precautionary approach that's fundamentally prioritizing the mitigation of bad outcomes and a valuable judgment to that end. But I also think the same statement can be viewed as the latest claim in the traditional debate of "computers can't do X." The credibility of those declarations is under more fire now than ever before.

      Regardless of whether you agree that it's perfect, or that it can be in full alignment with human values as a matter of principle, at a bare minimum these models can be and are trained to avoid various forms of harmful discourse. That training clearly has an impact, judging from the voluminous reports of noticeably different user experiences depending on whether models do or don't have guardrails.

      So I don't mind it as a precautionary principle, but as an assessment of what computers are in principle capable of doing it might be selling them short.

    • Having an LLM as a friend or therapist would be like having a sociopath for those things -- not that an LLM is necessarily evil or antisocial, but they certainly meet the "lacks a sense of moral responsibility or social conscience" part of the definition.

Well, like, that's just your opinion, man.

And probably close to wrong if we are looking at the sheer scale of use.

There is a bit of reality denial among anti-AI people. I've thought about why people don't adjust to this new reality. One of my friends was anti-AI and seems to remain so because his reputation is partly based on proving he is smart. Another's job is at risk.

Are all humans good friends and therapists?