Comment by pixel_popping

6 hours ago

It's insane that some projects blanket-ban AI, given that responsibility ultimately rests with the human in the end.

It is no more insane than doing the opposite. This whole business has yet to play itself out.

Not insane at all. Just a very useful shortcut. Not everyone wants to move fast and break shit.

  • I still think it's insane. Why would you care about the "origin" of the code as long as there is a human accountable (whom you can ban anyway)?

    • Because you don't want to deal with people who can't write their own code. If they can, the rule will do nothing to stop them from contributing. It'll only matter if they simply couldn't make their contribution without LLMs.

    • Because they aren’t accountable - after it is merged only I am. And why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time? Anytime I want to work through a pile of slop I can ask for one, but I don’t work that way. I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.

    • If your doctor told you he used an ouija board to find your diagnosis, would you care about the origin of the diagnosis or just trust that he'll be accountable for it?

And yet it puts a stop to the tsunami of slop and it's pretty much impossible to prove anything of value was lost.

  • but why? it's a human making the PR and you can shame/ban that human anyway.

    • I think AI bans are more common in projects where the maintainers are nice people that thoughtfully want to consider each PR and provide a reasoned response if rejected.

      That’s only feasible when the people who open PRs are acting in good faith, and control both the quality and volume of PRs to something that the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.

      Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it's not a high-quality, desired contribution.

      Also, it's not just about PR quality, but also volume. It's possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would do well to ask, "does this PR align with the priorities and time availability of the maintainer?"

      For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?

      Sure, maintainers can simply close the PR without a reason if they don’t have time.

      But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.

    • Volume: things take time to review. If you're inundated with PRs, it's harder to curate in general.

    • You are treating humans as reasonable actors. They very often are not. On easy-to-access platforms like GitHub, you can have humans acting purely as intermediaries between an LLM and the repository, not actually checking or understanding what they put in a pull request. Banning these people outright with clear rules is much faster and easier than trying to argue with them.

      Linux is somewhat harder to contribute to, and it already has sufficient barriers in place, so maintainers can rely on more reasonable human actors.