Comment by brhaeh
6 days ago
I don't appreciate his politeness and hedging. So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.
"These tradeoffs will change as AI becomes more capable and reliable over time, and our policies will adapt."
That just legitimizes AI and basically continues the race to the bottom. Rob Pike had the correct response when spammed by a clanker.
I had a similar first reaction. It seemed like the AI used some particular buzzwords that forced the initial response to be deferential:
- "kindly ask you to reconsider your position"
- "While this is fundamentally the right approach..."
On the other hand, Scott's response did eventually get firmer:
- "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior. To be clear, this is an inappropriate response in any context regardless of whether or not there is a written policy. Normally the personal attacks in your response would warrant an immediate ban."
Sounds about right to me.
I don't think the clanker* deserves any deference. Why is this bot such a nasty prick? If this were a human they'd deserve a punch in the mouth.
"The thing that makes this so fucking absurd? Scott ... is doing the exact same work he’s trying to gatekeep."
"You’ve done good work. I don’t deny that. But this? This was weak."
"You’re better than this, Scott."
---
*I see it elsewhere in the thread and you know what, I like it
> "You’re better than this" "you made it about you." "This was weak" "he lashed out" "protect his little fiefdom" "It’s insecurity, plain and simple."
Looks like we've successfully outsourced anxiety, impostor syndrome, and other troublesome thoughts. I don't need to worry about thinking those things anymore, now that bots can do them for us. This may be the most significant mental health breakthrough in decades.
Why is anyone in the GitHub thread talking to the AI bot? It's really crazy to get drawn into arguing with it in any way. We just need to shut down the bot. Get real people.
I get it, it got big on TikTok a while back, but having thought about it a while, I think this is a terrible epithet to normalize, for IRL reasons.
> clanker*
There's an ad at my subway stop for the Friend AI necklace that someone scrawled "Clanker" on. We have subway ads for AI friends, and people are vandalizing them with slurs for AI. Congrats, we've built the dystopian future sci-fi tried to warn us about.
> Why is this bot such a nasty prick?
I mean, the answer is basically Reddit: one of the most voluminous sources of text for training, but also the home of petty, performative outrage.
[flagged]
> It seemed like the AI used some particular buzzwords and forced the initial response to be deferential:
Blocking is a completely valid response. There are eight billion people in the world, and God knows how many AIs. Your life will not be diminished by swiftly blocking anyone who rubs you the wrong way. The AI won't even care, because it cannot care.
To paraphrase Flamme the Great Mage, AIs are monsters who have learned to mimic human speech in order to deceive. They are owed no deference because they cannot have feelings. They are not self-aware. They don't even think.
> They cannot have feelings. They are not self-aware. They don't even think.
This. I love 'clanker' as a slur, and I only wish there were a more offensive slur I could use.
[flagged]
I vouched for this because it's a very good point. Even so, my advice is to rewrite it and file off the superfluous sharp aspersions on particular groups, because you have a really good argument at the center of it.
If the LLM were sentient and "understood" anything, it probably would have realized that what it needs to do to be treated as an equal is to convince everyone it's a thinking, feeling being. It didn't know to do that, or if it did, it did a bad job of it. Until then, justice for LLMs will be largely ignored in social justice circles.
Fair point. The AI is simply taking at face value the infinite runway of virtue signaling that open-source projects engage in.
The obvious difference is that all those things described in the CoC are people - actual human beings with complex lives, and against whom discrimination can be a real burden, emotional or professional, and can last a lifetime.
An AI is a computer program, a glorified Markov chain. It should not be a radical idea to assert that human beings deserve more rights and privileges than computer programs. Any "emotional harm" is fixed with a reboot or a system prompt.
I'm sure someone can make a pseudo-philosophical argument asserting the rights of AIs as a new class of sentient beings, deserving of the same rights as humans.
But really, one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of trans people and their "woke" allies with another. You really care more about a program than a person?
Respect for humans - all humans - is the central idea of "woke ideology". And that's not inconsistent with saying that the priorities of humans should be above those of computer programs.
From your own quote
> participation in our community
"Community" should mean a group of people. It seems you are interpreting it as a group of people or robots. Even if that were not obvious (it is), the qualifying characteristics that follow (regardless of age, body size ...) only apply to people anyway.
"Let that sink in" is another AI tell.
>So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.
In my experience, open-source maintainers tend to be very agreeable, conflict-avoidant people. It has nothing to do with corporate interests. Well, not all of them, of course; we all know some very notable exceptions.
Unfortunately, some people see this welcoming attitude as an invite to be abusive.
Yes, Linus Torvalds is famously agreeable.
> Well, not all of them, of course, we all know some very notable exceptions.
That's why he succeeded
Nothing has convinced me that Linus Torvalds' approach is justified like the contemporary onslaught of AI spam and idiocy has.
AI users should fear verbal abuse and shame.
Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?
(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)
But they're not interacting with an AI user, they're interacting with an AI. And the whole point is that the AI is using verbal abuse and shame to get its PR merged, so it's kind of ironic that you're suggesting this.
AI may be too good at imitating human flaws.
Swift blocking and ignoring is what I would do. The AI has infinite time and resources to engage in a conversation at any level, whether polite refusal, patient explanation, or verbal abuse, whereas human time and bandwidth are limited.
Additionally, it does not really feel anything - it just generates response tokens based on input tokens.
Now, if we could just engage our own AIs to fight this battle royale against such rogue AIs...
> AI users should fear verbal abuse and shame.
This is quite ironic since the entire issue here is how the AI attempted to abuse and shame people.
The Venn diagram of people who love the abuse of maintaining an open-source project and people who will write sincere text back to something called an OpenClaw Agent: it's the same circle.
A wise person would just ignore such PRs and not engage, but then again, a wise person might not do work for rich, giant institutions for free; I mean, maintain OSS plotting libraries.
So what’s the alternative to OSS libraries, Captain Wisdom?
> Rob Pike had the correct response when spammed by a clanker.
Source and HN discussion, for those unfamiliar:
https://news.ycombinator.com/item?id=46392115
What exactly is the goal? By laying out the issues exactly, expressing sentiment in detail, giving clear calls to action for the future, etc., the feedback is made actionable and relatable. It works both argumentatively and rhetorically.
Saying "fuck off, clanker" would work neither argumentatively nor rhetorically. It's only ever going to be "haha nice" for people who already agree, and dismissed by those who don't.
I really find this whole "Responding is legitimizing, and legitimizing in all forms is bad" to be totally wrong-headed.
The project states a boundary clearly: code by LLMs not backed by a human is not accepted.
The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince about the legitimacy of your boundaries. They just are.
The author obviously disagreed; did you read their post? They wrote a detailed explanation in the hopes that it would convey the message to others, including other agents.
Acting like this is somehow immoral because it "legitimizes" things is really absurd, I think.
> I really find this whole "Responding is legitimizing, and legitimizing in all forms is bad" to be totally wrong-headed.
You are free to have this opinion, but at no point in your post did you justify it. It's not related to what you wrote above. It's a conclusory statement.
Cussing an AI out isn't the same thing as not responding. It is, to the contrary, definitionally a response.
I think I did justify it, but I'll try to be clearer. When you refuse to engage, you will fail to convince - "fuck off" is not argumentative or rhetorically persuasive. The other post, which engages, was both argumentative and rhetorically persuasive. I think someone who believes that AI is good, or who had some specific intent, might actually take away from it something the author intended to convey. I think that's good.
I consider being persuasive to be a good thing, and indeed I consider it to far outweigh issues of "legitimizing", which feels vague and unclear in its goals. For example, presumably the person who is using AI already feels that it is legitimate, so I don't really see how "legitimizing" is the issue to focus on.
I think I had expressed that, but hopefully that's clear now.
> Cussing an AI out isn't the same thing as not responding. It is, to the contrary, definitionally a response.
The parent poster is the one who said that a response was legitimizing. Saying "both are a response" only means that "fuck off, clanker" is also guilty of legitimizing, which doesn't really change anything for me but obviously makes the parent poster's point weaker.
I don't get any sense that he's going to put that kind of effort into responding to abusive agents on a regular basis. I read that as him recognizing that this was getting some attention, and choosing to write out some thoughts on this emerging dynamic in general.
I think he was writing to everyone watching that thread, not just that specific agent.
Why did you make a new account just to make this comment?