Comment by simonw

18 hours ago

/r/MyBoyfriendIsAI https://www.reddit.com/r/MyBoyfriendIsAI/ is a whole thing. It's not a joke subreddit.

The range of attitudes in there is interesting. There are a lot of people who take a fairly sensible "this is interactive fiction" kind of attitude, and there are others who bristle at any claim or reminder that these relationships are fictitious. There are even people with human partners who have "married" one or more AIs.

  • Do you think they know they're just one context reset away from the LLM not recognizing them at all and treating them like a stranger off the street? For someone mentally ill and emotionally attached to the context, that would be jarring, to say the least.

    • Many of them are very aware of how LLMs work: they regularly run into context limits, and there have been threads about thoughtfully pruning context vs. letting the LLM compact it, making backups, etc.

      Their hobby is... weird, but they're not stupid.

    • Generally, yes: they experience that routinely and complain and joke about it. Some of them also describe such jarring experiences as making them cry for a long time.

      If you can be respectful and act like a guest, it's worth reading a little there. You'll see the worrisome aspects in more detail but also a level of savvy that sometimes seems quite strange given the level of attachment. It's definitely interesting.

  • IIRC you'll get modded or banned for being critical of the use case. Which is their "right", but it's freaking weird.
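The prune-vs-compact tradeoff those comments describe can be sketched in miniature: keeping a chat history under a token budget either by dropping the oldest turns outright (pruning) or by replacing them with a summary placeholder (compacting). Everything below is illustrative; the token estimator, function names, and budget are invented for the example and don't reflect any real product's API.

```python
def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def prune(history: list[str], budget: int) -> list[str]:
    """Drop oldest turns until the history fits the token budget."""
    kept = list(history)
    while kept and sum(estimate_tokens(t) for t in kept) > budget:
        kept.pop(0)  # the oldest turn is lost entirely
    return kept

def compact(history: list[str], budget: int) -> list[str]:
    """Replace the oldest turns with a summary placeholder."""
    kept = list(history)
    dropped = []
    while len(kept) > 1 and sum(estimate_tokens(t) for t in kept) > budget:
        dropped.append(kept.pop(0))
    if dropped:
        # A real system would ask the model to summarize the dropped turns;
        # here a placeholder marks where that summary would go.
        kept.insert(0, f"[summary of {len(dropped)} earlier turn(s)]")
    return kept

history = ["turn one " * 10, "turn two " * 10, "turn three " * 10]
print(prune(history, budget=60))    # oldest turn gone
print(compact(history, budget=60))  # oldest turn replaced by a summary stub
```

Pruning keeps the remaining turns verbatim but forgets the beginning entirely; compacting preserves a trace of the past at lower fidelity, which is roughly the choice the subreddit's threads debate.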

And it's a pity that this highly prevalent phenomenon (to exaggerate a bit, probably the most influential direction tech in general will take in the next couple of years) is barely mentioned on HN.

  • I dunno. Tbf that subreddit has a combination of

      - a large number of incredibly fragile users
      - extremely "protective" mods
      - a regular stream of drive-by posts that regulars there see as derogatory or insulting
      - a fair amount of internal diversity and disagreement
    

    I think discussion on forums larger than it, like HN or popular subreddits, is likely to drive traffic that ultimately backfires on its members. It's inevitable, and it's already happening, but I'm not sure it needs to increase.

    I do think the phenomenon is a matter of legitimate public concern, but idk how that can best be addressed. Maybe high-quality, long-form journalism? But probably not just cross-posting the sub in larger fora.

    • Part of me thinks maybe I erred in bringing this up, but there are discussions worth having about continued access to software that's working for people, regardless of what it is, and about whether this is healthy. I'm probably live-and-let-live on this, but there have been cases of suicide and murder in which chatbots were involved, and these people are potentially vulnerable to manipulation by the company.

  • > highly prevalent phenomenon

    Any numbers/reference behind this?

    ChatGPT has ~300 million daily active users. A 0.02% rate (delusional disorder prevalence) would be about 60k people.

    • I'm talking about romance, not delusion. Of course, you can consider AI romance a delusion, but it's not included in that percentage you mentioned.

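The back-of-the-envelope arithmetic above is easy to verify. The user count and prevalence rate are the commenter's assumptions, not figures checked here:

```python
# ~300 million daily active users times a 0.02% prevalence rate.
daily_active_users = 300_000_000
prevalence = 0.0002  # 0.02%
print(round(daily_active_users * prevalence))  # 60000
```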

>It's not a joke subreddit.

Spend a day on Reddit and you'll quickly realize many subreddits are just filled with lies.

  • Any sub based on storytelling or reposting memes, videos, etc. is a karma farm full of lies.

    Most subs that are based on politics or current events are at best biased, at worst completely astroturf.

    The only subs that I think still have mostly legit users are municipal subs (which still get targeted by bots when anything political comes up) and hobby subs where people show their work or discuss things.