Comment by bdangubic

3 days ago

the best way I found to deal with non-believers is to have claude run code reviews on their own work. I'll point it to an older commit and get back a 3-page markdown file :) works really, really well.

on one-shotting a 3-minute prompt in 30 minutes, though: software is a living organism, and early gains can (and often do) result in later pains. I don't use that kind of argument as it relates to AI, because the follow-up, once the organism spreads its wings to production, seldom makes its way to HN (if this 30-minute one-shot results in a huge security breach, I doubt you'd be back here with a follow-up; you'd quietly handle it…)

You can get it to generate a 3-page markdown file for any random code, or for code it just generated itself. If asked, it will produce a plausible-looking review with recommendations and possible issues.

How impressed someone is by that will depend on the recipient.

  • output, not recipient. try it on your own code. you won't agree with everything in that example 3-page markdown (much like you push back on a PR), but in a significant number of cases code changes were made based on the provided output

    • Recipient, as in the person the output is intended for.

      And I have seen what an AI does when it provides a code review, and it is very much something that plausibly looks like a code review: a lot of suggestions and nitpicks that on the surface look like plausible comments, but without any understanding behind them. How much value a programmer gets from that depends on the programmer. It reminds me of the value teddy bears have at a support desk, or why some users are actually helped by being forced to go through layers of FAQ/AI-suggested solutions before they are allowed to talk to a real person. Sometimes all a person needs to improve something is time to think about the code from a new perspective, and an AI code review can help them find that time by throwing a bunch of shallow comments at them.