Comment by delfinom

7 months ago

Wait until you see

https://www.reddit.com/r/MyBoyfriendIsAI/

They are very upset by the gpt5 model

AI safety is focused on AGI, but maybe it should be focused on how little "artificial intelligence" it takes to send people completely off the rails. We could barely handle social media; LLMs seem to be too much.

  • I think it's a canary in a coal mine, and the writing is already on the wall. People who are using AI like in the post above are likely not stupid. I think those people truly want love and connection in their lives, and for some reason or another, they are unable to obtain it.

    I have the utmost confidence that things are only going to get worse from here. The world is becoming more isolated and individualistic as time progresses.

    • I can understand that. I’ve had long periods in my life where I’ve desired that - I’d argue I’m probably in one now. But it’s not real; it can’t possibly perform that function. It seems like it borders on some kind of delusion to use these tools for that.

      1 reply →

  • It was ever thus. People tend to see human-like behavior where there is none. Be it their pets, plants, or… programs. The ELIZA effect.[1]

    [1] https://en.wikipedia.org/wiki/ELIZA_effect

    • Isn't the ELIZA effect specific to computer programs?

      Seeing human-like traits in pets or plants is a much trickier subject than seeing them in what is ultimately a machine, developed entirely separately from the evolution of living organisms.

      We simply don't know what it's like to be a plant or a pet. We can't say they definitely have human-like traits, but we similarly can't rule it out. Some of the uncertainty lies in the fact that we do share ancestors at some point, and our biologies aren't entirely distinct. The same isn't true when comparing humans and computer programs.

      4 replies →

What's even sadder is that so many of those posts and comments are clearly written by ChatGPT:

https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_jus...

  • Counterpoint: these people are so deep in the hole with their AI usage that they are starting to copy the writing styles of AI.

    There's already some indication that society is starting to pick up previously "less used" English words due to AI and use them frequently.

    • Do you have any examples? I've noticed something similar with memes and slang, they'll sometimes popularize an existing old word that wasn't too common before. This is my first time hearing AI might be doing it.

      4 replies →

    • This happens with Trump supporters too.

      You can immediately identify them based on writing style and the use of CAPITALIZATION mid sentence as a form of emphasis.

      6 replies →

That subreddit is fascinating and saddening at the same time. What I read will haunt me.

oh god, this is some real authentic dystopia right here

these things are going to end up in android bots in 10 years too

(honestly, I wouldn't mind a super smart, friendly bot in my old age that knew all my quirks but was always helpful... I just would not have a full-on relationship with said entity!)

I don't know how else to describe this than sad and cringe. At least people obsessed with owning multiple cats were giving their affection to something that theoretically can love you back.

  • You think that's bad, see this one: https://www.reddit.com/r/Petloss/

    Just because AI is different doesn't mean it's "sad and cringe". You sound like how people viewed online friendships in the '90s. It's OK. Real friends die or change, and people have to cope with that. People imagine their dead friends are still somehow around (heaven, ghosts, etc.) when they're really not. It's not all that different.

    • That entire AI boyfriend subreddit feels like some sort of insane asylum dystopia to me. It's not just people cosplaying or writing fanfic. It's people saying they got engaged to their AI boyfriends ("OMG, I can't believe I'm calling him my fiancé now!"), complete with physical rings. Artificial intimacy to the nth degree. I'm assuming a lot of those posts are just creative writing exercises, but in the past 15 years or so my thoughts of "people can't really be that crazy" when I read batshit stuff online have consistently been proven incorrect.

      3 replies →

  • It's sad, but is it really "cringe"? Can people have nothing? Why can't we have a chat bot to bs with? Many of us are lonely and miserable, but also not really into making friends IRL.

    It shouldn't be so much of an ask to at least give people language models to chat with.

    • What you're asking for feels akin to feeding a hungry person chocolate cake and nothing else. Yeah, maybe it feels nice, but if you just keep eating chocolate cake, obviously bad shit happens. Something else needs to be fixed; coping with a chatbot (I don't even want to call it band-aiding, because it's more akin to doing drugs, IMO) only really digs the hole deeper.

      2 replies →

    • Make sure they get local models to run offline. The fact that they rely on a virtual friend in the cloud, beyond their control, one that can disappear or change personality in an instant, makes this even more sad. Local models would also allow the chats to be truly anonymous and avoid companies abusing data collected by spying on what those people are telling their "friends".

Oh yikes, these people are ill and legitimately need help.

  • I am not confident that most of them, if any, are even real.

    If they are real, then what kind of help could there be for something like this? Perhaps community? But sadly, we've all but destroyed those. Pills likely won't treat this, and I cannot imagine trying to convince someone to go to therapy for a worse and more expensive version of what ChatGPT already provides them.

    It's truly frightening stuff.

I refuse to believe that this whole subreddit is not satire or an elaborate prank.

  • No. Confront reality. There are some really cooked people out there.

    • They don't even have to be "cooked"; people generally are pretty similar, which is why common scams work so well at a large scale.

      All AI has to be is mildly but not overly sycophantic, acting as a supporter/cheerleader to someone, or affirming their beliefs. Most people like that quality in a partner or friend. I actually want to recognize OAI's courage in deprecating 4 because of its sycophancy. Generally, I don't think getting people addicted to flattery or model personalities is good.

      Several times I've had people tell me about interpersonal arguments and the vindication they felt when ChatGPT took their side. I cringe, but it's not my place to tell them ChatGPT is meant to be mostly agreeable.

    • I can confirm this; I caught my father using ChatGPT as a therapist a few months ago.

      The chats were heartbreaking. From the logs you could really tell he was fully anthropomorphizing it, and he was visibly upset when I asked him about it.

It seems outrageous that a company whose purported mission is centered on AI safety is catering to a crowd whose use case is virtual boyfriend or pseudo-therapy.

Maybe AI... shouldn't be convenient to use for such purposes.