Comment by add-sub-mul-div

12 hours ago

> It doesn't matter if that's AI generated, generated with the help of other humans or typed up by a monkey.

However true that should be in principle, in practice there are significant slop problems on the ground that we can't ignore and have to deal with. Context and subtext matter. It's already reasonable in some cases to weight contributions from different people differently based on who they are.

> Splitting out AI into its own entity invites a world of issues; AI cannot take ownership of the bugs it writes

The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.

Isolating the AI is the next best thing. It's still an account facing consequences, even if it's anonymous. Yes, there are issues, but there's no perfect solution in a world where we can't have good things anymore.

Most code was garbage before AI, and most engineers made significant mistakes. Very little code is not future tech debt. Review and testing have always been the only defense; the reputation or skill of the committer never was.

  • > The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.

    The important part here is that reputation creates an incentive to be conscious of what you're submitting in the first place, not that it grants you some free pass from review.

    There's been an unfortunate uptick in people submitting garbage they spent no time on and then complaining about feedback, because they trust what the AI put together more than their own skills and don't think it could be wrong.

  • The issue is the asymmetry between the time it takes to generate convincing AI slop and the time it takes to review it. The convincing part was still somewhat difficult when slop had to be written by hand.