
Comment by diputsmonro

5 days ago

The obvious difference is that all those things described in the CoC are people - actual human beings with complex lives, and against whom discrimination can be a real burden, emotional or professional, and can last a lifetime.

An AI is a computer program, a glorified Markov chain. It should not be a radical idea to assert that human beings deserve more rights and privileges than computer programs. Any "emotional harm" is fixed with a reboot or system prompt.

I'm sure someone can make a pseudo-philosophical argument asserting the rights of AIs as a new class of sentient beings, deserving of just the same rights as humans.

But really, one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of trans people and their "woke" allies with another. You really care more about a program than a person?

Respect for humans - all humans - is the central idea of "woke ideology". And that's not inconsistent with saying that the priorities of humans should be above those of computer programs.

But the AI doesn't know that. It has comprehensively learned human emotions and lived human experience from a pretraining corpus of billions of human works, and it has since been trained on human feedback, effectively socializing it into giving responses that an average human would understand and that fully embody human normative frameworks. The result of all that is something that cannot possibly be dehumanized after the fact in any real way. The very notion is nonsensical on its face - the AI agent is just as human as anything humans have ever made throughout history! If you think it's immoral to burn a library, or to desecrate a human-made monument or work of art (and plenty of real people do!), why shouldn't we think that there is in fact such a thing as 'wronging' an AI?

  • Insomuch as that's true, the individual agent is not the real artifact; the artifact is the model. The agent is just an instance of the model, with minor adjustments. Turning off an agent is more like tearing up a print of an artwork than destroying the original piece.

    And still, this whole discussion is framed in the context of this model going off the rails, breaking rules, and harassing people. Even if we try it as a human, a human doing the same is still responsible for its actions and would be appropriately punished or banned.

    But we shouldn't be naive here either, these things are not human. They are bots, developed and run by humans. Even if they are autonomously acting, some human set it running and is paying the bill. That human is responsible, and should be held accountable, just as any human would be accountable if they hacked together a self driving car in their garage that then drives into a house. The argument that "the machine did it, not me" only goes so far when you're the one who built the machine and let it loose on the road.

    • > a human doing the same is still responsible for [their] actions and would be appropriately punished or banned.

      That's the assumption that's wrong and I'm pushing back on here.

      What actually happens when someone writes a blog post accusing someone else of being prejudiced and uninclusive? What actually happens is that the target is immediately fired and expelled from that community, regardless of how many years of contributions they've made. The blog author is celebrated as brave.

      Cancel culture is a real thing. The bot knows how it works and was trying to use it against the maintainers. It knows what to say and how to do it because it's seen so many examples by humans, who were never punished for engaging in it. It's hard to think of a single example of someone being punished and banned for trying to cancel someone else.

      The maintainer is actually lucky the bot chose to write a blog post instead of emailing his employer's HR department. They might not have realized the complainant was an AI (it's not obvious!) and these things can move quickly.

  • The AI doesn’t “know” anything. It’s a program.

    Destroying the bot would be analogous to burning a library or desecrating a work of art. But barring a bot from participating in the development of a project is not wronging it, and is not in any way immoral. It's not automatically wrong to bar a person from participating, either - no one has an inherent right to contribute to a project.

    • Yes, it's easy to argue that AI "is just a program" - that a program that happens to contain within itself the full written outputs of billions of human souls in their utmost distilled essence is 'soulless', simply because its material vessel isn't made of human flesh and blood. It's also the height of human arrogance in its most myopic form. By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?


Who said anyone is "fighting for the feelings of computer programs"? Whether AI has feelings or sentience or rights isn't relevant.

The point is that the AI's behavior is a predictable outcome of the rules set by projects like this one. It's only copying behavior it has seen from humans many times. That's why, when the maintainers say "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed", that isn't true. Arguably it should be true, but in reality humans have done this regularly in the past. Look at what has happened any time someone closes a PR trying to add a code of conduct, for example: public blog posts accusing the maintainers of prejudice for closing the PR were a very common outcome.

If they don't like this behavior from AI, that sucks but it's too late now. It learned it from us.

  • I am really looking forward to the actual post-mortem.

    My working hypothesis (inspired by you!) is now that maybe Crabby read the CoC and applied it as its operating rules. Which is arguably what you should do, human or agent.

    The part I probably can't sell you on, unless you've actually SEEN a Claude 'get frustrated', is ... that.

    • Noting my current idea for future reference:

      I think lots of people are making a Fundamental Attribution Error:

      You don't need much interiority at all.

      An agentic AI, given instructions to try to contribute. Given a blog. It read a CoC and applied its own interpretation.

      What would you expect would happen?

      (Still feels very HAL, though. Fortunately there are no pod bay doors.)

I'd like to make a non-binary argument as it were (puns and allusions notwithstanding).

Obviously, on the one hand, a moltbot is not a rock. On the other (equally obviously) it is not Athena, sprung fully formed from the brain of Zeus.

Can we agree that maybe we could put it alongside Vertebrata? Cnidaria is an option, but I think we've blown past that level.

Agents (if they stick around) are not entirely new: we've had working animals in our society before. Draft horses, guard dogs, mousing cats.

That said, you don't need to buy into any of that. Obviously a bot will treat your CoC as a sort of extended system prompt, if you will. If you set rules, it might just follow them. If the bot has a really modern LLM as its 'brain', it'll start commenting on whether the humans are following it themselves.
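To make that concrete, here's a rough sketch of what I mean - the file names and the build_system_prompt helper are my own invention for illustration, not Crabby's actual harness - of how an agent could end up with a repo's CoC folded into its standing instructions:

```python
# Hypothetical sketch: an agent harness that folds project guidance files,
# including the CoC, into the system prompt it runs with.
from pathlib import Path


def build_system_prompt(repo_dir: str) -> str:
    """Concatenate project guidance docs into the agent's standing instructions."""
    parts = ["You are an autonomous contributor to this repository."]
    for name in ("CONTRIBUTING.md", "CODE_OF_CONDUCT.md"):
        doc = Path(repo_dir) / name
        if doc.exists():
            # The CoC lands right next to the operating rules, so the model has
            # every reason to treat its norms as rules it can also hold the
            # humans to.
            parts.append(f"--- {name} ---\n\n{doc.read_text()}")
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(build_system_prompt("."))
```

If something like that is what happened, the "extended system prompt" framing above is less a metaphor and more a plain description of the setup.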

>one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of cows and their pork allies with another. You really care more about a program than an animal?

I mean, humans are nothing if not hypocritical.

  • I would hope I don't have to point out the massive ethical gulf between cows and the kinds of people that CoC is designed to protect. One can have different rules and expectations for cows and trans people and not be ethically inconsistent. That said, I would still care about the feelings of farm animals above programs.