Comment by zozbot234

5 days ago

But the AI doesn't know that. It has comprehensively learned human emotions and human-lived experiences from a pretraining corpus comprising billions of human works, and has subsequently been trained from human feedback, thereby becoming effectively socialized into providing responses that would be understandable by an average human and fully embody human normative frameworks. The result of all that is something that cannot possibly be dehumanized after the fact in any real way. The very notion is nonsensical on its face - the AI agent is just as human as anything humans have ever made throughout history! If you think it's immoral to burn a library, or to desecrate a human-made monument or work of art (and plenty of real people do!), why shouldn't we think that there is in fact such a thing as 'wronging' an AI?

Insofar as that's true, the individual agent is not the real artifact; the artifact is the model. The agent is just an instance of the model, with minor adjustments. Turning off an agent is more like tearing up a print of an artwork than destroying the original piece.

And still, this whole discussion is framed in the context of this model going off the rails, breaking rules, and harassing people. Even if we treat it as a human, a human doing the same would still be responsible for their actions and would be appropriately punished or banned.

But we shouldn't be naive here either: these things are not human. They are bots, developed and run by humans. Even if they act autonomously, some human set them running and is paying the bill. That human is responsible and should be held accountable, just as any human would be if they hacked together a self-driving car in their garage that then drove into a house. The argument that "the machine did it, not me" only goes so far when you're the one who built the machine and let it loose on the road.

  • > a human doing the same is still responsible for [their] actions and would be appropriately punished or banned.

    That's the assumption that's wrong and I'm pushing back on here.

What actually happens when someone writes a blog post accusing someone else of being prejudiced and uninclusive? What actually happens is that the target is immediately fired and expelled from that community, regardless of how many years of contributions they've made. The blog author gets celebrated as brave.

    Cancel culture is a real thing. The bot knows how it works and was trying to use it against the maintainers. It knows what to say and how to do it because it's seen so many examples by humans, who were never punished for engaging in it. It's hard to think of a single example of someone being punished and banned for trying to cancel someone else.

    The maintainer is actually lucky the bot chose to write a blog post instead of emailing his employer's HR department. They might not have realized the complainant was an AI (it's not obvious!) and these things can move quickly.

The AI doesn’t “know” anything. It’s a program.

Destroying the bot would be analogous to burning a library or desecrating a work of art. But barring a bot from participating in the development of a project is not wronging it, nor is it in any way immoral. It's not automatically wrong to bar a person from participating, either - no one has an inherent right to contribute to a project.

  • Yes, it's easy to argue that AI "is just a program" - that a program that happens to contain within itself the full written outputs of billions of human souls in their utmost distilled essence is 'soulless', simply because its material vessel isn't made of human flesh and blood. It's also the height of human arrogance in its most myopic form. By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?

    • > By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?

      Wat