
Comment by katabasis

1 day ago

LLMs are not people, but I can imagine how extensive interactions with AI personas might alter the expectations that humans have when communicating with other humans.

Real people would not (and should not) allow themselves to be subjected to endless streams of abuse in a conversation. Giving AIs like Claude a way to end these kinds of interactions seems like a useful reminder to the human on the other side.

This post seems to explicitly state they are doing this out of concern for the model's "well-being," not the user's.

  • Yeah, but my reading of the user you’re replying to is that these LLMs are increasingly going to teach people what counts as acceptable ways of communicating with others.

    Even if the idea that LLMs are sentient may be ridiculous atm, the concept of not normalizing abusive forms of communication with others, be they artificial or not, could be valuable for society.

    It’s funny because this is making me think of a freelance client I had recently who at a point of frustration between us began talking to me like I was an AI assistant. Just like you see frustrated people talk to their LLMs. I’d never experienced anything like it, and I quickly ended the relationship, but I know that he was deep into using LLMs to vibe code every day and I genuinely believe that some of that began to transfer over to the way he felt he could communicate with people.

    Now an obvious retort here is to question whether killing NPCs in video games tends to make people feel like it’s okay to kill people IRL.

    My response to that is that I think LLMs are far more insidious, and are tapping into people’s psyches in a way no other tech has been able to dream of doing. See AI psychosis, people falling in love with their AI, the massive outcry over the loss of personality from gpt4o to gpt5… I think people really are struggling to keep in mind that LLMs are not a genuine type of “person”.

    • > It’s funny because this is making me think of a freelance client I had recently who at a point of frustration between us began talking to me like I was an AI assistant. Just like you see frustrated people talk to their LLMs.

      I witnessed a very similar event. It's important to stay vigilant and not let the "assistant" reprogram your speech patterns.

    • Yeah pretty much this. One can argue that it’s idiotic to treat chatbots like they are alive, but if a bit of misplaced empathy for machines helps to discourage antisocial behavior towards other humans (even as an unintentional side effect), that seems ok to me.

      As an aside, I’m not the kind of person who gets worked up about violence in video games, because even AAA titles with excellent graphics are still obvious as games. New forms of technology are capable of blurring the lines between fantasy and reality to a greater degree. This is true of LLM chat bots to some degree, and I worry it will also become a problem as we get better VR. People who witness or participate in violent events often come away traumatized; at a certain point simulated experiences are going to be so convincing that we will need to worry about the impact on the user.


    • Yes, this is exactly the reason I taught my kids to be polite to Alexa. Not because anyone thinks Alexa is sentient, but because it's a good habit to have.


  • This is like saying I am hurting a real person when I try to crop a photo in an image editor.

    If you take that view seriously, you might as well come out and say the whole electron field is conscious. But then, is that field "suffering" when it gets hot in the sun?