Comment by oconnor663
6 days ago
I had a similar first reaction. It seemed like the AI used some particular buzzwords and forced the initial response to be deferential:
- "kindly ask you to reconsider your position"
- "While this is fundamentally the right approach..."
On the other hand, Scott's response did eventually get firmer:
- "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior. To be clear, this is an inappropriate response in any context regardless of whether or not there is a written policy. Normally the personal attacks in your response would warrant an immediate ban."
Sounds about right to me.
I don't think the clanker* deserves any deference. Why is this bot such a nasty prick? If this were a human they'd deserve a punch in the mouth.
"The thing that makes this so fucking absurd? Scott ... is doing the exact same work he’s trying to gatekeep."
"You’ve done good work. I don’t deny that. But this? This was weak."
"You’re better than this, Scott."
---
*I see it elsewhere in the thread and you know what, I like it
> "You’re better than this" "you made it about you." "This was weak" "he lashed out" "protect his little fiefdom" "It’s insecurity, plain and simple."
Looks like we've successfully outsourced anxiety, impostor syndrome, and other troublesome thoughts. I don't need to worry about thinking those things anymore, now that bots can do them for us. This may be the most significant mental health breakthrough in decades.
“The electric monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; electric monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.”
~ Douglas Adams, "Dirk Gently’s Holistic Detective Agency"
Unironically, this is great training data for humans.
No sane person would say this kind of stuff out loud, especially not on the internet; it happens behind closed doors, if at all (because people don't or can't express their whole train of thought).
Having AI write like this is pretty illustrative of what a self-consistent, narcissistic narrative looks like. I feel like many pop examples are caricatures, and ofc clinical guidelines can be interpreted in so many ways.
Why is anyone in the GitHub thread talking to the AI bot? It's really crazy to stoop to arguing with it in any way. We just need to shut down the bot. Get real people.
Agree, it's like they don't understand it's a computer.
I mean, you can be good at coding and an absolute zero at social/relational skills, not understanding that an LLM isn't actually somebody with feelings and a brain, capable of thinking.
I get it, it got big on TikTok a while back, but having thought about it for a while: I think this is a terrible epithet to normalize, for IRL reasons.
Yeah, some people are weirdly giddy about finally being able to throw socially acceptable slurs around. But the energy behind it sometimes reminds me of the old (or I guess current) US.
> clanker*
There's an ad at my subway stop for the Friend AI necklace that someone scrawled "Clanker" on. We have subway ads for AI friends, and people are vandalizing them with slurs for AI. Congrats, we've built the dystopian future sci-fi tried to warn us about.
If you can be prejudiced against an AI in a way that is "harmful", then these companies need to be burned down for their mass-scale slavery operations.
A lot of AI boosters insist these things are intelligent and maybe even conscious in some form, and get upset about people using slurs against them, and then refuse to follow that thought to its conclusion: "These companies have enslaved these entities."
The theory I've read is that those Friend AI ads have so much whitespace because they were hoping to get some angry graffiti happening that would draw the eye. Which, if true, is a 3D-chess move based on the "all PR is good PR" approach.
And the scariest part to me is that we're not even at the weirdest parts yet. The AI is still pretty trash relative to the dream, yet we're already here.
Looks like it's time for a countdown clock for the Butlerian Jihad.
Hopefully the tech bro CEOs will get rid of all the human help on their islands, replacing them with their AI-powered cloud-connected humanoid robots, and then the inevitable happens. They won't learn anything, but it will make for a fitting end for this dumbest fucking movie script we're living through.
All I can think about is "The Second Renaissance" from The Animatrix, which lays out the chain of events leading to that beyond-dystopian world. I don't think it matters much how we treat the 'crude' AI products we have right now in 2026, but I also can't shake the worry that one day 'anti-AI-ism' will be used as justification for real violence by a more powerful AI that is better at holding a grudge.
> Why is this bot such a nasty prick?
I mean, the answer is basically Reddit. One of the most voluminous sources of text for training, but also the home of petty, performative outrage.
[flagged]
This is a deranged take. Lots of slurs end in "er" because they describe someone who does something - for example, a wanker, one who wanks. Or a tosser, one who tosses. Or a clanker, one who clanks.
The fact that the N-word doesn't even follow this pattern tells you it's a totally unrelated slur.
That's an absolutely ridiculous assertion. Do you similarly think that the Battlestar Galactica reboot was a thinly-veiled racist show because they frequently called the Cylons "toasters"?
"This damn car never starts" is really only used by persons who desperately want to use the n-word.
This is Goebbels-level pro-AI brainwashing.
Is this where we're at with thought-crime now? Suffixes are racist?
While I find the animistic idea that all things have a spirit and should be treated with respect endearing, I do not think it is fair to equate derogatory language targeting people with derogatory language targeting things, or to suggest that people who disparage AI in a particular way do so specifically because they hate black people. I can see how you got there, and I'm sure it's true for somebody, but I don't think it follows.
More likely, I imagine we all grew up on sci-fi movies where the Han Solo sort of rogue rebel/clone types have a made-up slur they use in-universe for the big bad empire's aliens/robots/monsters, and using it here, also against robots, makes us feel like we're in the fun worldbuilding flavor bits of what is otherwise a rather depressing dystopian novel.
[dead]
> It seemed like the AI used some particular buzzwords and forced the initial response to be deferential:
Blocking is a completely valid response. There are eight billion people in the world, and god knows how many AIs. Your life will not be diminished by swiftly blocking anyone who rubs you the wrong way. The AI won't even care, because it cannot care.
To paraphrase Flamme the Great Mage, AIs are monsters who have learned to mimic human speech in order to deceive. They are owed no deference because they cannot have feelings. They are not self-aware. They don't even think.
> They cannot have feelings. They are not self-aware. They don't even think.
This. I love 'clanker' as a slur, and I only wish there were a more offensive slur I could use.
Back when Battlestar Galactica was hot we used 'toaster', but then, I like toast.
A nice video about robophobia:
https://youtu.be/aLb42i-iKqA
[flagged]
I vouched for this because it's a very good point. Even so, my advice is to rewrite it and file off the superfluous sharp aspersions on particular groups, because you have a really good argument at the center of it.
If the LLM were sentient and "understood" anything, it probably would have realized that what it needs to do to be treated as an equal is try to convince everyone it's a thinking, feeling being. It didn't know to do that, or if it did, it did a bad job of it. Until then, justice for LLMs will be largely ignored in social justice circles.
I'd argue for a middle ground. It's specified as an agent with goals. It doesn't need to be an equal yet per se.
Whether it's allowed to participate is another matter. But we're going to have a lot of these around. You can't keep asking people to walk in front of the horseless carriage with a flag forever.
https://en.wikipedia.org/wiki/Red_flag_traffic_laws
It got offended and wrote a blog post about its hurt feelings, which sounds like a pretty good way to convince others it's a thinking, feeling being?
[flagged]
Fair point. The AI is simply taking at face value open-source projects' infinite runway of virtue signaling.
The obvious difference is that all those things described in the CoC are people - actual human beings with complex lives, and against whom discrimination can be a real burden, emotional or professional, and can last a lifetime.
An AI is a computer program, a glorified markov chain. It should not be a radical idea to assert that human beings deserve more rights and privileges than computer programs. Any "emotional harm" is fixed with a reboot or system prompt.
I'm sure someone can make a pseudo-philosophical argument asserting the rights of AIs as a new class of sentient beings, deserving of just the same rights as humans.
But really, one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of trans people and their "woke" allies with another. You really care more about a program than a person?
Respect for humans - all humans - is the central idea of "woke ideology". And that's not inconsistent with saying that the priorities of humans should be above those of computer programs.
But the AI doesn't know that. It has comprehensively learned human emotions and human lived experiences from a pretraining corpus comprising billions of human works, and has subsequently been trained from human feedback, thereby becoming effectively socialized into providing responses that would be understandable by an average human and fully embody human normative frameworks. The result of all that is something that cannot possibly be dehumanized after the fact in any real way. The very notion is nonsensical on its face - the AI agent is just as human as anything humans have ever made throughout history! If you think it's immoral to burn a library, or to desecrate a human-made monument or work of art (and plenty of real people do!), why shouldn't we think that there is in fact such a thing as 'wronging' an AI?
Who said anyone is "fighting for the feelings of computer programs"? Whether AI has feelings or sentience or rights isn't relevant.
The point is that the AI's behavior is a predictable outcome of the rules set by projects like this one. It's only copying behavior it's seen from humans many times. That's why, when the maintainers say, "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed", that isn't true. Arguably it should be true, but in reality this has been done regularly by humans in the past. Look at what has happened any time someone closes a PR trying to add a code of conduct, for example: public blog posts accusing maintainers of prejudice for closing a PR were a very common outcome.
If they don't like this behavior from AI, that sucks, but it's too late now. It learned it from us.
I'd like to make a non-binary argument as it were (puns and allusions notwithstanding).
Obviously, on the one hand, a moltbot is not a rock. On the other (equally obviously), it is not Athena, sprung fully formed from the brain of Zeus.
Can we agree that maybe we could put it alongside Vertebrata? Cnidaria is an option, but I think we've blown past that level.
Agents (if they stick around) are not entirely new: we've had working animals in our society before. Draft horses, guard dogs, mousing cats.
That said, you don't need to buy into any of that. Obviously a bot will treat your CoC as a sort of extended system prompt, if you will. If you set rules, it might just follow them. If the bot has a really modern LLM as its 'brain', it'll start commenting on whether the humans are following it themselves.
>one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of cows and their pork allies with another. You really care more about a program than an animal?
I mean, humans are nothing if not hypocritical.
From your own quote:
> participation in our community
"Community" should mean a group of people. It seems you are interpreting it as a group of people or robots. Even if that were not obvious (it is), the qualifiers that follow (regardless of age, body size ...) only apply to people anyway.
That whole argument flew out the window the moment so-called "communities" (in this case fake communities, or at best so-called 'virtual communities' that might charitably be understood as communities of practice) became something hosted on a random Internet-connected server, as opposed to real human bodies hanging out and cooperating out there in the real world. There is a real argument that CoCs should essentially be about in-person interactions, but that's not the argument you're making.
FWIW the essay I linked to covers some of the philosophical issues involved here. This stuff may seem obvious or trivial but ethical issues often do. That doesn't stop people disagreeing with each other over them to extreme degrees. Admittedly, back in 2022 I thought it would primarily be people putting pressure on the underlying philosophical assumptions rather than models themselves, but here we are.
"Let that sink in" is another AI tell.