Comment by pixel_popping
8 hours ago
I still think it's insane. Why would you care about the "origin" of the code as long as there is a human accountable (that you can ban anyway)?
Because you don't want to deal with people who can't write their own code. If they can, the rule will do nothing to stop them from contributing. It'll only matter if they simply couldn't make their contribution without LLMs.
So tomorrow, if a model genuinely finds a bunch of real vulnerabilities, you would just ignore them? That makes no sense.
An LLM finding problems in code is not at all the same as someone using it to contribute code to a project that they couldn't write or haven't written themselves. A report stating "There is a bug/security issue here" is not itself something I have to maintain; it's something I can react to by writing code to fix it, and that code is what I then have to maintain.
Because they aren’t accountable; after it is merged, only I am. And why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time? Anytime I want to work through a pile of slop I can ask for one, but I don’t work that way. I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
> I work with the agent to create plans first and refine them, and the author of a PR who couldn’t do that adds nothing.
As someone who has been using AI extensively lately, this is my preferred way of doing serious projects with them:
Let them create the plan, help them refine it, let them rip; then scrutinize their diffs, push back on the parts I don't like or don't trust; rinse and repeat until commit.
Yet I assume this would still be unacceptable to most anti-AI projects, because 90%+ of the committed code was "written by the AI."
> why would I want to go back and forth with an LLM through PR comments when I could just talk to the agent myself in real time?
Presumably for the same reason you go back and forth with humans through PR comments even when you could just code it yourself in real time: the individual on the other end of the PR should be saving you time. It's still hard work contributing quality MRs, even with AI.
If your doctor told you he used a Ouija board to find your diagnosis, would you care about the origin of the diagnosis or just trust that he'd be accountable for it?
If the Ouija board was powered by Opus, who knows :D