Comment by skeledrew
4 days ago
Another consideration here, one that cuts both ways, is that the project has few maintainers. While reviewing generated code pushed onto them could be a great burden, getting new features done at all is a great burden too. So it boils down to a choice: deal with generated code for feature X, or go without feature X for a long time, if ever.
> or not having X feature for a long time, if ever
Given that the feature is already quite far into development (i.e. the implementation that the LLM copied), it doesn't seem like that is the case here.
With the understanding that generated code for X may never be mergeable given the limited resources.
Yes, and that may eventually lead to a more generation-friendly fork to which those desiring said friendliness, or just more features in general, will flock.
I think everyone would appreciate if these people using LLMs to spit out these PRs would fork things and "contribute" to those forks instead.
Their issue seemed to be the process. They're set up for a certain flow, and jamming that flow breaks it. It wouldn't matter whether the cause were AI or a sudden surge of interested developers. So it's not a question of accepting or rejecting AI-generated code, but of changing the process, and that in itself is time-consuming and carries potential risk.
Definitely, and the primary issue in this case was that the PRer didn't discuss with the maintainers before going to work. Things could've gone very differently had that discussion happened, especially with the intent to use generated code disclosed up front. Though of course there's the risk that disclosure could've led to a preemptive shutdown of the discussion, as there are those who simply don't want to consider it at all.