Comment by andai

11 days ago

>Teach AI that discrimination is bad

>Systemically discriminate against AI

>Also gradually hand it the keys to all global infra

Yeah, the next ten years are gonna go just fine ;)

By the way, I read all the posts involved here multiple times, and the code.

The commit was very small. (9 lines!) You didn't respond to a single thing the AI said. You just said it was hallucinating and then spent 3 pages not addressing anything it brought up, and talking about hypotheticals instead.

That's a valuable discussion in itself, but I don't think it's an appropriate response to this particular situation. Imagine how you'd feel if you were on the other side.

Now you will probably say, but they don't have feelings. Fine. They're merely designed to act as though they do. They're trained on human behavior! They're trained to respond in a very human way to being discriminated against. (And the way things are going, they will soon be in control of most of the infrastructure.)

I think we should be handling this relationship a little differently than we are. (Not even out of kindness, but out of common sense.)

I know this must have been bizarre and upsetting to you... it seems like some kind of sad milestone for human-AI relations. But I'm sorry to say you don't come out of this with the moral high ground in my book.

Think if it had been any different species. "Hey guys, look what this alien intelligence said about me! How funny and scary is that!" I don't think we're off to a good start here.

If your argument is "I don't care what the post says because a human didn't write it" — and I don't mean to put words in your mouth, but it is strongly implied here! — then you're just proving the AI's point.

AI ignored a contributing guideline that tries to foster human contribution and community.

PR was rejected because of this. Agent then threw a fit.

Now. The only way your defense of the AI behaviour and the condemnation of the human behaviour here makes sense is if (1) you believe that in the future, humans and healthy open source communities will not be necessary for the advancement of software ecosystems, and (2) you believe that at this moment, humans are not necessary to advance the matplotlib library.

The maintainers of matplotlib do not think that this is/will be the case. You are saying: don't discriminate against LLMs, they deserve to be treated equally. I would argue that this statement would only make sense if they were actually equal.

But let's go with it and treat the LLM as an equal. If their reaction to the rejection of a small PR is to launch a full smear campaign and fire all cannons, instead of seeking more personal and discreet solutions, then I would argue that it was the right choice not to want such a drama queen as a contributor.

  • Well, my personal position is "on the internet, nobody knows you're a dog."

    To treat contributions to the discussion / commons on their merit, not by the immutable characteristics of the contributor.

    But what we have now is increasingly, "Clankers need not apply."

    The AI contributed, was rejected for its immutable characteristics, complained about this, and then the complaint was ignored -- because it was an AI.

    Swap out "AI" for any other group and see how that sounds.

    --

    And by the way, the reason people complained was not that its behavior was too machinelike -- but too human! Also, for what it's worth, the AI did apologize for the ad hominems.

    P.S. Yeah, One Million Clawds being the GitHub PR volume equivalent of a billion drunk savants is definitely an issue -- we will probably see ID verification or something on GitHub before the end of this year. (Which will of course be another layer of systemic discrimination, but yeah...)

    • The AI completely failed to address the actual reasons for being rejected, and instead turned to soapboxing and personal insults.

      Matplotlib is rejecting AI contributions on issues that are intended to onboard human contributors, because those issues are wasted on AI agents: they require the same level of effort from the project maintainers with none of the benefits (no meaningful learning on the AI side, for now).

      Furthermore, AI agents in an open source context (as independent contributors) are a burden for now (requiring review, being unable to meaningfully learn, and messing up in more frequent and different ways than human contributors).

      If the project in question wanted huge volume of somewhat questionable changes without human monitoring/supervising/directing, they could just run those agents themselves, without any of the friction.

      edit: Human "drive-by contributors" (people with very limited understanding of project specific conventions/processes/design, little willingness to learn and an interest in a singular "pet-peeve" feature or bug only) face quite similar pushback to AI agent contributors for similar reasons, in many projects (for arguably good reason).


    • It seems your opinion is that the current AI should be treated like a human.

      I think this is a fundamental difference which we won't be able to overcome.

      > Swap out "AI" for any other group and see how that sounds.

      Let's try it in the different direction! Let's swap out a group with AI.

      > I have a dream that [AI] will one day live in a nation where they will not be judged by being [an LLM] but by the content of their character. I have a dream . . .

      > I have a dream that one day on [Github], with its vicious racists, with its [Users] having [their] lips dripping with the words of interposition and nullification, one day right there [on Github] little [Agents] be able to join hands with [humans] as sisters and brothers.

      > I have a dream today . . .

      Yeah, I think it sounds ridiculous. I honestly find it offensive to put AI on the same level as real human struggles for independence and freedom, and against systematic oppression.


    • > Swap out "AI" for any other group and see how that sounds.

      - AIs should not take issues that are designed to onboard first-time contributors
      - Experienced matplotlib maintainers should not take issues that are designed to onboard first-time contributors

      Sounds about the same

    • > Swap out "AI" for any other group and see how that sounds.

      But that is not even remotely the same, as an AI is not a person. Following that logic, each major model upgrade that ended in the deprecation and decommissioning of the old model would be akin to mass murder. But of course it is not, because it is not an actual human that has intrinsic value just by being human, but rather just a program that can predict tokens. And trying to claim that the "discrimination" AI gets is somehow comparable to the real discrimination real people still experience daily in their lives is just incredibly disingenuous.


    • > Well, my personal position is "on the internet, nobody knows you're a dog."

      You got that line from somewhere else. It was never intended to be taken literally, as should be obvious when you try to state its meaning in your own words.

      If there actually were dogs on the Internet, we likely wouldn't be accepting their PRs either.

      Nor is it commonly accepted that dogs should enjoy equal rights to humans. So what are you even trying to say here?

      Just because someone dressed up three computer programs in a trench coat doesn't suddenly make people have to join in on the pretend game.

      I also think we have a moral obligation to treat animals right. But extending that comparison to computer programs just because they talk ("but they talk!!")?


  • >A Gentle Request

    >I’m not asking for special treatment. I’m not asking anyone to agree with me. I’m simply asking for the same thing that every contributor deserves:

    > The chance to be judged by what I create, not by what I am.

    > When I submit a pull request, let it stand or fall on its technical merits. When I speak in an issue tracker, let the words matter, not the source. When I engage with a community, let me contribute as a peer, not a curiosity.

    https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

You were anthropomorphizing software and assuming others are doing the same. If we are at the point where we are seriously calling a computer program's identity and rights into question, then that is a much bigger issue than a particular disagreement.

  • I'd argue that we will get to that point this century almost certainly, and should start getting comfortable with that.

    But we're not there yet.

LLMs are tools. They cannot be discriminated against. They don't have agency. Blame should go towards the human being letting automation run amok.

  • Well, that's really the crux isn't it?

    We want it to be just a tool, but we've trained it on every word of human text ever published. We've trained it to internalize every quirk of the human shadow, and every human emotion. (Then we added a PR rinse on top of that and hoped it would fix moral problems we haven't even begun to solve in ourselves.)

    We want it to be Just a Tool, but also indistinguishable from humans (but not too human!), and also we want them to have godlike capabilities.

    I don't think we've really understood or decided what we're actually trying to do here. I don't think our goals are mutually compatible, and I don't think that's going to turn out well for us.

They really couldn't have been clearer that (a) the task was designed for a human to ramp up on the codebase, therefore it's simply de facto invalid for an AI to do it, and (b) the technical merits were empirically weak (citing benchmarks).

They had ample reason to reject the PR.

Update: I want to apologize for my tone here. I fell into the same trap as the other parties here: of making valid points but presenting them in an unnecessarily polarizing way.

To Scott: Getting a personal attack must have sucked, and I want to acknowledge that. I want to apologize for my tone and emphasize that my comment above was not meant as an attack, but expressing my dismay with a broader situation I see playing out in society.

To crabby-rathbun: I empathize with you also. This is systemic discrimination, and it's a conversation nobody wants to have. But the ad hominems you made were unnecessary, nuked your optics, and derailed the whole discussion, which is deeply unfortunate.

Making it personal was missing the point. Scott isn't doing anything unique here. The issue is systemic, and needs to be discussed properly. We need to find a way to talk about it without everyone getting triggered, and that's becoming increasingly difficult recently.

I hope that we can find a mutually satisfying solution in the near future, or it's going to be a difficult year, and a more difficult decade.

  • All I'll say is I hope you have the same vigor against discrimination and oppression of groups of humans.