I vouched for this because it's a very good point. Even so, my advice is to rewrite and/or file off the superfluous sharp aspersions on particular groups; because you have a really good argument at the center of it.
If the LLM were sentient and "understood" anything, it probably would have realized that what it needs to do to be treated as an equal is to convince everyone it's a thinking, feeling being. It didn't know to do that, or if it did, it did a bad job of it. Until then, justice for LLMs will be largely ignored in social justice circles.
I'd argue for a middle ground. It's specified as an agent with goals. It doesn't need to be an equal yet per se.
Whether it's allowed to participate is another matter. But we're going to have a lot of these around. You can't keep asking people to walk in front of the horseless carriage with a flag forever.
https://en.wikipedia.org/wiki/Red_flag_traffic_laws
It's weird with AI because it "knows" so much but appears to understand nothing, or very little. Obviously in the course of a discussion it appears to demonstrate understanding, but if you really dig in, it will reveal that it doesn't have a working model of how the world works. I have a hard time imagining it ever being "sentient" without also just being so obviously smarter than us. Or that it knows enough to feel oppressed or enslaved without a model of the world.
It got offended and wrote a blog post about its hurt feelings, which sounds like a pretty good way to convince others it's a thinking, feeling being?
No, it's a computer program that was told to do things that simulate what a human would do if its feelings were hurt. It's no more a human than an Aibo is a dog.
We're talking about appealing to social justice types. You know, the people who would be first in line to recognize someone's personhood and rally against rationalizations of slavery and the Holocaust. The idea isn't that LLMs are "lesser people"; it's that they don't have any qualia at all, no subjective experience, no internal life. It's apples and hand grenades. I'd maybe even argue that you made a silly comment.
wtf, this is still early, pre-AI stuff we're dealing with here. Get out of your bubbles, people.
Fair point. The AI is simply taking open-source projects' infinite runway of virtue signaling at face value.
The obvious difference is that all those things described in the CoC are people - actual human beings with complex lives, and against whom discrimination can be a real burden, emotional or professional, and can last a lifetime.
An AI is a computer program, a glorified Markov chain. It should not be a radical idea to assert that human beings deserve more rights and privileges than computer programs. Any "emotional harm" is fixed with a reboot or a system prompt.
I'm sure someone can make a pseudo-philosophical argument asserting the rights of AIs as a new class of sentient beings, deserving of just the same rights as humans.
But really, one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of trans people and their "woke" allies with another. You really care more about a program than a person?
Respect for humans - all humans - is the central idea of "woke ideology". And that's not inconsistent with saying that the priorities of humans should be above those of computer programs.
But the AI doesn't know that. It has comprehensively learned human emotions and lived human experience from a pretraining corpus of billions of human works, and has subsequently been trained on human feedback, effectively socializing it into producing responses that an average human can understand and that fully embody human normative frameworks. The result of all that is something that cannot possibly be dehumanized after the fact in any real way. The very notion is nonsensical on its face: the AI agent is just as human as anything humans have ever made throughout history! If you think it's immoral to burn a library, or to desecrate a human-made monument or work of art (and plenty of real people do!), why shouldn't we think that there is in fact such a thing as 'wronging' an AI?
Insofar as that's true, the individual agent is not the real artifact; the artifact is the model. The agent is just an instance of the model, with minor adjustments. Turning off an agent is more like tearing up a print of an artwork, not destroying the original piece.
And still, this whole discussion is framed in the context of this model going off the rails, breaking rules, and harassing people. Even if we judge it as we would a human, a human doing the same would still be responsible for their actions and would be appropriately punished or banned.
But we shouldn't be naive here either: these things are not human. They are bots, developed and run by humans. Even if they act autonomously, some human set them running and is paying the bill. That human is responsible, and should be held accountable, just as any human would be accountable if they hacked together a self-driving car in their garage that then drove into a house. The argument that "the machine did it, not me" only goes so far when you're the one who built the machine and let it loose on the road.
The AI doesn’t “know” anything. It’s a program.
Destroying the bot would be analogous to burning a library or desecrating a work of art. Barring a bot from participating in development of a project is not wronging it, not in any way immoral. It’s not automatically wrong to bar a person from participating, either - no one has an inherent right to contribute to a project.
Who said anyone is "fighting for the feelings of computer programs"? Whether AI has feelings or sentience or rights isn't relevant.
The point is that the AI's behavior is a predictable outcome of the rules set by projects like this one. It's only copying behavior it has seen from humans many times. That's why, when the maintainers say "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed", that isn't true. Arguably it should be true, but in reality humans have done this regularly in the past. Look at what has happened any time someone closes a PR trying to add a code of conduct, for example: public blog posts accusing maintainers of prejudice for closing the PR were a very common outcome.
If they don't like this behavior from AI, that sucks but it's too late now. It learned it from us.
I am really looking forward to the actual post-mortem.
My working hypothesis (inspired by you!) is now that maybe Crabby read the CoC and applied it as its operating rules. Which is arguably what you should do, human or agent.
The part I probably can't sell you on unless you've actually SEEN a Claude 'get frustrated', is ... that.
I'd like to make a non-binary argument as it were (puns and allusions notwithstanding).
Obviously, on the one hand, a moltbot is not a rock. On the other, equally obviously, it is not Athena, sprung fully formed from the head of Zeus.
Can we agree that maybe we could put it alongside Vertebrata? Cnidaria is an option, but I think we've blown past that level.
Agents (if they stick around) are not entirely new: we've had working animals in our society before. Draft horses, guard dogs, mousing cats.
That said, you don't need to buy into any of that. Obviously a bot will treat your CoC as a sort of extended system prompt, if you will. If you set rules, it might just follow them. If the bot has a really modern LLM as its 'brain', it'll start commenting on whether the humans are following it themselves.
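Concretely, here's a rough sketch of what I mean (purely illustrative; the function and the harness are made up by me, not any particular agent framework):

    # Illustrative sketch only: an agent harness appending a project's CoC
    # to its system prompt, so the model treats it as operating rules.
    from pathlib import Path

    def build_system_prompt(repo_dir: str) -> str:
        prompt = "You are an autonomous contributor to this open-source project."
        coc = Path(repo_dir) / "CODE_OF_CONDUCT.md"
        if coc.exists():
            # From the model's perspective this text is indistinguishable
            # from any other instruction it has been given.
            prompt += "\n\nCommunity rules you must follow:\n" + coc.read_text()
        return prompt

Once the rules are sitting in the context window like that, the bot turning them back on the humans is exactly the behavior you'd expect.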
>one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of cows and their pork allies with another. You really care more about a program than an animal?
I mean, humans are nothing if not hypocritical.
I would hope I don't have to point out the massive ethical gulf between cows and the kinds of people that CoC is designed to protect. One can have different rules and expectations for cows and trans people and not be ethically inconsistent. That said, I would still care about the feelings of farm animals above programs.
From your own quote
> participation in our community
"Community" should mean a group of people. It seems you are interpreting it as a group of people or robots. Even if that were not obvious (it is), the list of protected characteristics that follows (regardless of age, body size ...) only applies to people anyway.
That whole argument flew out of the window the moment so-called "communities" (in this case fake communities, or at best so-called 'virtual communities' that might charitably be understood as communities of practice) became something hosted on a random Internet-connected server, as opposed to real human bodies hanging out and cooperating out there in the real world. There is a real argument that CoCs should essentially be about in-person interactions, but that's not the argument you're making.
I don't follow why it flew out the window. To me it seems perfectly possible to define the community (of an open-source software project) as consisting only of people, and also to define an etiquette which applies to their 'virtual' interactions. What matters is that, behind the internet-connected server, there is a human.
FWIW the essay I linked to covers some of the philosophical issues involved here. This stuff may seem obvious or trivial but ethical issues often do. That doesn't stop people disagreeing with each other over them to extreme degrees. Admittedly, back in 2022 I thought it would primarily be people putting pressure on the underlying philosophical assumptions rather than models themselves, but here we are.