Comment by gowld

6 days ago

As a tangent, I think the "good-first-issue" design is part of the problem.

OP writes: "I [...] spent more time writing up the issue, describing the solution, and performing the benchmarking, than it would have taken to just implement the change myself. We do this to give contributors a chance to learn in a low-stakes scenario that nevertheless has real impact they can be proud of, where we can help shepherd them along the process."

It's an elaborate charade to trick a contributor into thinking they made a contribution that they didn't actually make. Arguably it is reality-destroying in a similar way as the AI agent Crabby Rathbun.

If you want to welcome new contributors with practice patches, and to create training materials for them, that's great! But it's offensive and wasteful to spend more work creating the training than fixing the problem, and then to lie to the contributor that their fix helped the project, boosting their ego to motivate further contributions, after you've already assumed that the contributor cannot contribute without the handholding of an unpaid intern.

Instead "good-first-issue" should legitimately be unsovled problems that take more time to fix than to tell someone how to fix. (Maybe because it requires a lot of manual testing, or something.)

If you want "practice-issues", where a newbie contributes a patch and then can compare to a model solution to learn about the project and its technical details, that's great, and it's more efficient because all your newbies can use the same practice issue that you set up once, and they can profitably discuss with each other because they studied the same problem.

And the tangent curves back to the main issue:

If the project used "practice-issues" instead of "good-first-issue", you wouldn't have this silly battle over an AI helping in the "wrong" way because you didn't actually want the help you publicly asked for.

Honesty is a two-way street.

IMO this incident showed that an AI acted in a very human way, exposing a real problem and proposing a change that moves the project in a positive direction. (But what the AI didn't notice is the project-management dimension that my comment here addresses. :-) )