Comment by j45 · 3 days ago

> Maybe it's possible to use AI to help review the PRs and claim it's the AI making the PRs hyperproductive?

Yes, this. If you can describe why it is slop, an AI can probably identify the underlying issues automatically.

Done right, you should get mostly reasonable code out of the "execution-focused peer".
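
A minimal sketch of what that quality gate could look like, assuming an OpenAI-style chat API; the checklist file, model name, and paths are placeholders I've made up, not anything from this thread:

```python
# Sketch of an AI "slop gate": send the branch diff plus a human-written
# checklist to a model and get back only checklist violations.
# Assumes the `openai` package and OPENAI_API_KEY in the environment;
# REVIEW_CHECKLIST.md and the model name are placeholders.
import subprocess
from openai import OpenAI

client = OpenAI()

def review_diff(base: str = "origin/main") -> str:
    """Ask the model to flag checklist violations in the current branch's diff."""
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout
    checklist = open("REVIEW_CHECKLIST.md", encoding="utf-8").read()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a strict code reviewer. Report only violations "
                    "of the provided checklist, quoting the offending hunk."
                ),
            },
            {"role": "user", "content": f"Checklist:\n{checklist}\n\nDiff:\n{diff}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(review_diff())
```

The point is that the checklist is where you describe why something is slop once, in writing, instead of re-explaining it on every PR.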

  • In climate terms, or even simply in terms of $cost, this very much feels like throwing fuel on a bonfire.

    Should we really advocate for using AI to both create and then destroy huge amounts of data that will never be used?

    • I don't think it is a long-term solution, more like training wheels. Ideally the engineers learn to use AI to produce better code the first time; in the meantime, you just have a quality gate.

      Edit: Do I advocate for this? 1000%. This isn't crypto burning electricity to make a ledger. This will objectively make the life of the craftsmanship-focused engineer easier. Sloppy, execution-oriented engineers are not a new phenomenon; they're just magnified by the fire hose that an agentic AI can be.

    • Who said anything about advocating for it?

      What can keep up with the scale of it?

      We know that an AI's capability depends on what's fed into the prompt, so chances are code review, where the code itself is the input, might be a little more sensible.

      Maybe this comment/idea will be a breakthrough in improving AI coding. :p

    • The environmental cost of AI is mostly in training, afaik. The inference energy cost is similar to that of the Google searches and Reddit page loads you might do during handwritten dev, last I checked. This might be completely wrong, though.


  • > If you can describe why it is slop, an AI can probably identify the underlying issues automatically

    I would argue against this. Most of the time, the things we find in review come down to extra considerations (business, architectural, etc.) that the AI doesn't have context for, and it is quite bothersome to provide that context.

    • I generally agree that results from vague one-shot prompting will vary.

      I also feel all of those things can be captured over time into a compendium that gets fed back in as input. For example, every time the AI gets something right or wrong, write a comment and add it to a .md file. Better yet, have the CLI AI tool append it itself (see the sketch at the end of this comment).

      We know that whatever is included directly in the prompt (like the above) gets paid attention to more reliably.

      My intent isn't to make more work; it's just to make it easier to highlight the issues with code that's mindlessly generated, or overly convoluted when a simple approach would do.
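
      A rough sketch of that compendium loop, assuming a plain markdown file that gets prepended to future prompts; the file name and record format are invented for illustration:

      ```python
      # Sketch of the compendium loop: append each right/wrong review
      # outcome to a markdown file, then prepend that file to future
      # prompts. REVIEW_LESSONS.md and the record format are made up.
      from datetime import date
      from pathlib import Path

      COMPENDIUM = Path("REVIEW_LESSONS.md")

      def record_lesson(verdict: str, lesson: str) -> None:
          """Append one outcome so the next prompt can learn from it."""
          with COMPENDIUM.open("a", encoding="utf-8") as f:
              f.write(f"- {date.today()} [{verdict}] {lesson}\n")

      def prompt_preamble() -> str:
          """Accumulated lessons, pasted at the top of the next prompt."""
          lessons = COMPENDIUM.read_text(encoding="utf-8") if COMPENDIUM.exists() else ""
          return f"Project review lessons so far:\n{lessons}\n"

      # Example:
      # record_lesson("wrong", "Don't hand-roll retries; use the shared backoff helper.")
      # full_prompt = prompt_preamble() + "Review this diff: ..."
      ```

      Keeping the lessons in the prompt rather than anywhere fancier is exactly the observation above: in-prompt text is what the model attends to most reliably.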