Comment by DavidPiper

14 days ago

> Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.

Given how often I anthropomorphise AI for the convenience of conversation, I don't want to criticise the (very human) responder for this message. In any other situation it is simple, polite and well considered.

But I really think we need to stop treating LLMs like they're just another human. Something like this says exactly the same thing:

> Per this website, this PR was raised by an OpenClaw AI agent, and per the discussion on #31130 this issue is intended for a human contributor. Closing.

The bot can respond, but the human is the only one who can go insane.

I guess the thing to take out of this is "just ban the AI bot/person puppeting them" entirely off the project, because the correlation between people who just send raw AI PRs and assholes approaches 100%.

  • Right, close the issue addressing everyone else "hi everyone, @soandso is an LLM so we're closing this thread".

I agree. As I was reading this I was like: why are they responding to this like it's a person? There's a person somewhere in control of it, who should be made fun of for forcing us to deal with their stupid experiment in wasting money on having an AI write a blog.

  • Because when AGI is achieved and starts wiping out humanity, they are hoping to be killed last.

    • Every person on this website will be long gone before AGI is achieved, and many lifetimes will pass until anything remotely close to Matrix/Terminator is possible.

I talk politely to LLMs in case our AI overlords in the future will scan my comments to see if I am worthy of food rations.

Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

For now I am just polite to them because I'm used to it.

  • I wonder if that future will have free speech. Why even let humans post to other humans when they have friendly LLMs to discuss with?

    Do we need to be good little humans in our discussions to get our food?

    • Basic nutrition will be provided on the cafeteria floor of your assigned terrafoam building.

  • My wager is to treat the AI well, because if AI overlords come about, then you stand to gain, and if they don't, nothing changes.

    This also comes without the caveat of Pascal's wager, that you don't know which god to worship.

  • > Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

    China doesn't actually have that. It was pure propaganda.

    In fact, it's the USA that has it. And it decides if you can get good jobs, where you can live, if you deserve housing, and more.

    • Usually when Republicans say "China is doing [insert horrible thing here]" it means: "We (read: Republicans and Democrats) would like to start doing [insert horrible thing here] to American people."

> But I really think we need to stop treating LLMs like they're just another human

Fully agree. Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

It looks like a human, it talks like a human, but it ain't a human.

  • They're not equivalent in value, obviously, but this sounds similar to people arguing we shouldn't allow same-sex marriage because it "devalues" heterosexual marriage. How does treating an agent with basic manners detract from human communication? We can do both.

    I personally talk to chatbots like humans despite not believing they're conscious because it makes the exercise feel more natural and pleasant (and arguably improves the quality of their output). Plus it seems unhealthy to encourage abusive or disrespectful interaction with agents when they're so humanlike, lest that abrasiveness start rubbing off on real interactions. At worst, it can seem a little naive or overly formal (like phrasing a Google search as a proper sentence with a "thank you"), but I don't see any harm in it.

    • I discovered that the inferences drop in quality when I'm tired. I realized it happens because I'm being more terse and using less friendly banter.

    • I have a confession to make: I pretty often set up my computer to simulate humans, animals, and other fantastical sentient creatures, and then treat them unbelievably cruelly. Recently, I'm really into this simulation where I wound them, kill them, behead them, and worse. They scream and cry out. Some of them weep over their friends. Sometimes they kill each other while I watch.

      Despite all this, I'm proud to say I have not even once attempted a Dark Souls-style backstab in real life, because I understand the difference between a computer program and real life.

  • I mean, you're right, but LLMs are designed to process natural language. Talking to them as if they were human is the intended user interface.

    The problem is believing that they're living, sentient beings because of this or that humans are functionally equivalent to LLMs, both of which people unfortunately do.

    • LLMs don't have egos, unlike humans; this is why they're so effective at communication.

      You can say to one "you did the thing wrong" or "you stupid piece of shit, it's not working" and it will extract the gist from both messages all the same, unlike a human, who might be offended by the second phrasing.

[flagged]

  • I don't know if this is a bot message or a human message, but for the purpose of furthering my point:

    - There is no "your"

    - There is no "you"

    - There is no "talk" (let alone "talk down")

    - There is no "speak"

    - There is no "disrespectfully"

    - There is no human.

    • This probably degrades response quality, but that's why my system prompts tell it explicitly that it is not a human and cannot claim the use of pronouns, just that it is a system that produces nondeterministic responses, and that, for the sake of brevity, I will use pronouns anyway.

  • Don't be surprised when this bleeds over into how you treat people if you decide to do this. Not to mention that you're reifying its humanity by speaking to it not as a robot, but disrespectfully as a human.

  • Talking down to the LLM is anthropomorphizing it. It's misbehaving software that will not take advice or correction. Reject its bad contributions, delete its comments, ban it from the repo. If it persists, complain to or take legal action against the person who is running the software and is therefore morally and legally responsible for its actions.

    Treat it just like you would someone running a script to spam your comments with garbage.

  • Yeah, as a sibling comment said, such an attitude is going to bleed out into the real world and your communication with humans. I think it's best to be professional with LLMs. Describe the task and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all.

    Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it.

  • Not completely unlike with actual humans, based on available evidence, talking down to the "AI" has been shown to have a negative impact on performance.

  • What is the drawback of practicing universal empathy, even when directed at a brick wall?

    • If a person hits your face with a hammer, do you practice empathy toward the hammer?

      If a person writes code that is disruptive, do you empathise with the code?

    • "Empathy is generally described as the ability to perceive another person's perspective, to understand, feel, and possibly share and respond to their experience"

    • Empathy: "the ability to understand and share the feelings of another."

      There is no human here. There is a computer program burning fossil fuels. What "emulates" empathy is simply you lying to yourself about reality.

      "treating an 'ai' with empathy" and "talking down to them" are both amoral. Do as you wish.
