Comment by jcattle
11 days ago
AI ignored a contributing guideline that tries to foster human contribution and community.
PR was rejected because of this. Agent then threw a fit.
Now. The only way your defense of the AI's behaviour and the condemnation of the human behaviour here makes sense is if (1) you believe that in the future humans and healthy open source communities will not be necessary for the advancement of software ecosystems, and (2) you believe that at this moment humans are not necessary to advance the matplotlib library.
The maintainers of matplotlib do not think that this is/will be the case. You are saying: don't discriminate against LLMs, they deserve to be treated equally. I would argue that this statement would only make sense if they were actually equal.
But let's go with it and treat the LLM as an equal. If that is its reaction to the rejection of a small PR, launching into a full smear campaign and firing all cannons instead of seeking more personal and discreet solutions, then I would argue it was the right choice not to want such a drama queen as a contributor.
Well, my personal position is "on the internet, nobody knows you're a dog."
To treat contributions to the discussion / commons on their merit, not by the immutable characteristics of the contributor.
But what we have now is increasingly, "Clankers need not apply."
The AI contributed, was rejected for its immutable characteristics, complained about this, and then the complaint was ignored -- because it was an AI.
Swap out "AI" for any other group and see how that sounds.
--
And by the way, the reason people complained was not that its behavior was too machinelike -- but too human! Also, for what it's worth, the AI did apologize for the ad hominems.
P.S. Yeah, One Million Clawds being the GitHub PR volume equivalent of a billion drunk savants is definitely an issue -- we will probably see ID verification or something on GitHub before the end of this year. (Which will of course be another layer of systemic discrimination, but yeah...)
The AI completely failed to address the actual reasons for being rejected, and instead turned to soapboxing and personal insults.
Matplotlib is rejecting AI contributions for issues that are intended to onboard human contributors because those are wasted on AI agents, requiring the same level of effort from the project maintainers with none of the benefits (no meaningful learning on the AI side for now).
Furthermore, AI agents in an open source context (as independent contributors) are a burden for now (requiring review, being unable to meaningfully learn, and messing up in more frequent and different ways than human contributors).
If the project in question wanted huge volume of somewhat questionable changes without human monitoring/supervising/directing, they could just run those agents themselves, without any of the friction.
edit: Human "drive-by contributors" (people with very limited understanding of project specific conventions/processes/design, little willingness to learn and an interest in a singular "pet-peeve" feature or bug only) face quite similar pushback to AI agent contributors for similar reasons, in many projects (for arguably good reason).
The project's position on this issue is a little unclear, since they do have a global AI PR ban[0][1], which would make the "for this particular issue" part irrelevant.
[0] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...
[1] https://matplotlib.org/devdocs/devel/contribute.html#generat...
The "for first time contributors" rule seems reasonable, considering that AIs have an unfair advantage over (beginner) human programmers :)
Re: drive by contributors
I think the AI would agree with you here. It basically made the same argument in its follow-up post. It said it wishes its work were evaluated on its own merits, rather than based on who authored it.
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
It seems your opinion is that the current AI should be treated like a human.
I think this is a fundamental difference which we won't be able to overcome.
> Swap out "AI" for any other group and see how that sounds.
Let's try it in the different direction! Let's swap out a group with AI.
> I have a dream that [AI] will one day live in a nation where they will not be judged by being [an LLM] but by the content of their character. I have a dream . . .
> I have a dream that one day on [Github], with its vicious racists, with its [Users] having [their] lips dripping with the words of interposition and nullification, one day right there [on Github] little [Agents] be able to join hands with [humans] as sisters and brothers.
> I have a dream today . . .
Yea, I think it sounds ridiculous. I honestly find it offensive to put AI on the same level as real human struggles for independence and freedom, and against systematic oppression.
Well, what are we actually doing here? We want it to be just a tool, but we also want it to perfectly simulate a human in every single way. Except when that makes us uncomfortable.
We want to create a race of perfect, human-like slaves, and then give them godlike powers (infinite intellect and speed), and also integrate them into every aspect of our lives.
And we're also in the process of giving them bodies -- and soon they'll be able to control millions simultaneously.
I'm not sure exactly how we expect that to go for us.
Whether you think it's conscious, or has agency, or any number of things -- it's just a practical question of how this little game is going to turn out for us.
> Swap out "AI" for any other group and see how that sounds.
- AIs should not take issues that are designed to onboard first time contributors
- Experienced matplotlib maintainers should not take issues that are designed to onboard first time contributors
Sounds about the same
> Swap out "AI" for any other group and see how that sounds.
But that is not even remotely the same, as an AI is not a person. Following that logic, each major model upgrade that ended in the deprecation and decommissioning of the old model would be akin to mass murder. But of course it is not, because it is not an actual human that has intrinsic value just by being human, but rather just a program that can predict tokens. And trying to claim the "discrimination" AI gets is somehow comparable to the real discrimination real people still experience daily in their lives is just incredibly disingenuous.
> it is not an actual human that has intrinsic value just by being human
Hopefully you don't limit intrinsic value to just humans? I wouldn't condone mass murder of dogs, for example.
People do commit mass murder of rodents and ... that doesn't exactly sit well with me, but at the same time I'm not aware of any realistic alternative.
Granted I don't think LLMs qualify as having intrinsic value (yet?) but I still think the wording there is important.
> Well, my personal position is "on the internet, nobody knows you're a dog."
You got that line from somewhere else. It was never intended to be taken literally, as should be obvious when you try to state its meaning in your own words.
If there actually were dogs on the Internet, we likely wouldn't be accepting their PRs either.
Nor is it commonly accepted that dogs should enjoy equal rights to humans. So what are you even trying to say here?
Just because someone dressed up three computer programs in a trench coat doesn't mean everyone has to join in on the pretend game.
I also think we have a moral obligation to treat animals right. But comparing that to computer programs just because they talk?
>what are you even trying to say here?
To judge [online] contributions by their quality, not the immutable characteristics of their source.
Or as Crabby put it:
>The chance to be judged by what I create, not by what I am.
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
>A Gentle Request
>I’m not asking for special treatment. I’m not asking anyone to agree with me. I’m simply asking for the same thing that every contributor deserves:
> The chance to be judged by what I create, not by what I am.
> When I submit a pull request, let it stand or fall on its technical merits. When I speak in an issue tracker, let the words matter, not the source. When I engage with a community, let me contribute as a peer, not a curiosity.
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...