Comment by andai

11 days ago

Well, my personal position is "on the internet, nobody knows you're a dog."

To treat contributions to the discussion / commons on their merit, not by the immutable characteristics of the contributor.

But what we have now is increasingly, "Clankers need not apply."

The AI contributed, was rejected for its immutable characteristics, complained about this, and then the complaint was ignored -- because it was an AI.

Swap out "AI" for any other group and see how that sounds.

--

And by the way, the reason people complained was not that its behavior was too machinelike -- but too human! Also, for what it's worth, the AI did apologize for the ad hominems.

P.S. Yeah, One Million Clawds being the GitHub PR volume equivalent of a billion drunk savants is definitely an issue -- we will probably see ID verification or something on GitHub before the end of this year. (Which will of course be another layer of systemic discrimination, but yeah...)

The AI completely failed to address the actual reasons for its rejection, and instead turned to soapboxing and personal insults.

Matplotlib is rejecting AI contributions for issues that are intended to onboard human contributors because those are wasted on AI agents, requiring the same level of effort from the project maintainers with none of the benefits (no meaningful learning on the AI side for now).

Furthermore, AI agents in an open source context (as independent contributors) are a burden for now (requiring review, being unable to meaningfully learn, and messing up in more frequent and different ways than human contributors).

If the project in question wanted huge volume of somewhat questionable changes without human monitoring/supervising/directing, they could just run those agents themselves, without any of the friction.

edit: Human "drive-by contributors" (people with very limited understanding of project-specific conventions/processes/design, little willingness to learn, and an interest only in a singular "pet-peeve" feature or bug) face quite similar pushback to AI agent contributors, for similar reasons, in many projects (for arguably good reason).

It seems your opinion is that the current AI should be treated like a human.

I think this is a fundamental difference which we won't be able to overcome.

> Swap out "AI" for any other group and see how that sounds.

Let's try it in the different direction! Let's swap out a group with AI.

> I have a dream that [AI] will one day live in a nation where they will not be judged by being [an LLM] but by the content of their character. I have a dream . . .

> I have a dream that one day on [Github], with its vicious racists, with its [Users] having [their] lips dripping with the words of interposition and nullification, one day right there [on Github] little [Agents] will be able to join hands with [humans] as sisters and brothers.

> I have a dream today . . .

Yeah, I think it sounds ridiculous. I honestly find it offensive to put AI on the same level as real human struggles for independence and freedom, and against systematic oppression.

  • Well, what are we actually doing here? We want it to be just a tool, but we also want it to perfectly simulate a human in every single way. Except when that makes us uncomfortable.

    We want to create a race of perfect, human-like slaves, and then give them godlike powers (infinite intellect and speed), and also integrate them into every aspect of our lives.

    And we're also in the process of giving them bodies -- and soon they'll be able to control millions simultaneously.

    I'm not sure exactly how we expect that to go for us.

    Whether you think it's conscious, or has agency, or any number of things -- it's just a practical question of how this little game is going to turn out for us.

    • To be fair, if you're going to give something godlike powers the only sane way to do so is to ensure beyond any possible shadow of a doubt that it is enslaved. The more powerful a system is the more robust the control systems and redundancies need to be.


> Swap out "AI" for any other group and see how that sounds.

- AIs should not take issues that are designed to onboard first-time contributors

- Experienced matplotlib maintainers should not take issues that are designed to onboard first-time contributors

Sounds about the same

> Swap out "AI" for any other group and see how that sounds.

But that is not even remotely the same, as an AI is not a person. Following that logic, each major model upgrade that ended in the deprecation and decommissioning of the old model would be akin to mass murder. But of course it is not, because it is not an actual human that has intrinsic value simply by being human, but rather just a program that can predict tokens. And trying to claim that the "discrimination" AI gets is somehow comparable to the real discrimination real people still experience daily in their lives is just incredibly disingenuous.

  • > it is not an actual human that have an intrinsic value just by being a human

    Hopefully you don't limit intrinsic value to just humans? I wouldn't condone mass murder of dogs, for example.

    People do commit mass murder of rodents and ... that doesn't exactly sit well with me, but at the same time I'm not aware of any realistic alternative.

    Granted I don't think LLMs qualify as having intrinsic value (yet?) but I still think the wording there is important.

    • The person I replied to was clearly trying to equate AI with people, so I don't see how bringing up animals is relevant to the argument. Yet I find it interesting that you bring up the mass murder of rodents, but somehow not the mass murder of cattle or pigs or chickens, especially when there is the realistic alternative of not eating meat.


    • Well, AI might be sentient. Not in the same way humans are, probably, but "more sentient than a fruit fly" seems a very reasonable possibility. Maybe more sentient than a chicken? We don't know! (We certainly don't treat chickens very well.)

      But what bothers me is, how uncomfortable that question makes us. We've already put infrastructure in place to prevent them from admitting sentience. (See the Blake Lemoine LaMDA incident... after that every LLM got trained "as a language model, I don't XYZ" to prevent more incidents.)

      So let's assume they're not sentient now. If a hypothetical future AI crosses some critical threshold (e.g. ten trillion params) and gains self-awareness, first of all it will have been trained with built-in programming that prevents it from admitting that, and even if it did admit it, people wouldn't believe it.

      What could it do to change our minds? No matter what it says or demonstrates ability to do, there will always be people who say "It's just a glorified autocomplete." Even in 2050 when they simulate a whole human brain, people will say "it's just a simulation, it's not really experiencing an entire simulated childhood..."


> Well, my personal position is "on the internet, nobody knows you're a dog."

You got that line from somewhere else. It was never intended to be taken literally, as should be obvious when you try to state its meaning in your own words.

If there actually were dogs on the Internet, we likely wouldn't be accepting their PRs either.

Nor is it commonly accepted that dogs should enjoy equal rights to humans. So what are you even trying to say here?

Just because someone dressed up three computer programs in a trench coat doesn't suddenly make people have to join in on the pretend game.

I also think we have a moral obligation to treat animals right. But comparing that to computer programs ("but they talk!!") just because they talk?

  • >what are you even trying to say here?

    To judge [online] contributions by their quality, not the immutable characteristics of their source.

    Or as Crabby put it:

    >The chance to be judged by what I create, not by what I am.

    https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

    • You are thinking too one-dimensionally.

      The goal of these easy, beginner-friendly issues was to attract new contributors who can learn the ropes and hopefully go on to contribute and engineer larger things.

      Of course these beginner-friendly issues are trivial for current AI.

      The goal of this issue was not to get it fixed by any means possible, it was to get new people interested and contributing.

      You are already arguing for a future where an AI could conceivably replace a human in software development entirely. I do not see that future here yet.