
Comment by sjdjsin

3 days ago

> This is the part where I simply don't understand the objections people have to coding agents

Because I have a coworker who is pushing slop at unsustainable levels, and proclaiming to management how much more productive he is. It’s now even more of a risk to my career to speak up about how awful his PRs are to review (and I’m not the only one on the team who wishes to speak up).

The internet is rife with people who claim to be living in the future where they are now 10x devs. Making these claims costs almost nothing, but it is negatively affecting my day-to-day work and that of many others.

I’m not necessarily blaming these internet voices (I don’t blame a bear for killing a hiker), but the damage they’re doing is still real.

I don't think you read the sentence you're responding to carefully enough. The antecedent of "this" isn't "coding agents" generally: it's "the value of an agent getting you past the blank page stage to a point where the substantive core of your feature functions well enough to start iterating on". If you want to respond to the argument I made there, you have to respond to the actual argument, not a broader one that's easier (and much less interesting) to take swipes at.

  • My understanding of your argument is:

    Because agents are good on this one specific axis (which I agree with and use fwiw), there’s no reason to object to them as a whole

    My argument is:

    The juice isn’t worth the squeeze. The small win (among others) is not worth the amounts of slop devs now have to deal with.

Not sure what to tell you, if there's a problem you have to speak up.

  • And the longer you wait, the worse it will be.

    Also, update your resume and get some applications out so you’re not just a victim.

What if your coworker was pushing tons of crap code and AI didn't exist? How would you deal with the situation then? Do that.

  • It's not the same because, with AI, they will likely be called anti-ai or anti-progress if they push back against it.

    • Don't mention AI; just point out why the code is bad. I've had co-workers who were vim wizards and others who literally hunted and pecked to type. At no point did their tools ever come up when reviewing their code. AI is a tool like anything else; treat it that way. This also means the OP's default can't be AI == bad; focus on the result.

Maybe it's possible to use AI to help review the PRs, and then claim it's the AI making the PRs hyperproductive?

  • Yes, this. If you can describe why it is slop, an AI can probably identify the underlying issues automatically.

    Done right you should get mostly reasonable code out of the "execution focused peer".

    • In climate terms, or even simply in terms of $cost, this very much feels like throwing fuel on a bonfire.

      Should we really advocate for using AI to both create and then destroy huge amounts of data that will never be used?


    • > If you can describe why it is slop, an AI can probably identify the underlying issues automatically

      I would argue against this. Most of the time, the things we find in review stem from extra considerations — often business or architectural concerns — that the AI has no context for, and providing that context is quite bothersome.
