Comment by anal_reactor
16 hours ago
I never review PRs, I always rubber-stamp them, unless they come from a certified idiot:
1. I don't care because the company at large fails to value quality engineering.
2. 90% of PR comments are arguments about variable names.
3. The other 10% are mistakes that have very limited blast radius.
It's just that, unless my coworker is a complete moron, whatever they came up with is most likely in an acceptable state, in which case there's no point in delaying the project.
Regarding knowledge share, it's complete fiction. Unless you actually make changes to some code, there's zero chance you'll understand how it works.
I'm very surprised by these comments...
I regularly review code that is way more complicated than it should be.
For the last few days I was going back and forth on reviews of a function that originally had a cyclomatic complexity of 23. Eventually I got it down to 8, but I had to pull the author into a pair programming session and show him how the complexity could be reduced.
Someone submitting work like that should either be junior enough that there's potential to train them (so your time investment is worth it), or be managed out.
Or maybe it didn't really matter that the function was complex, if the surrounding structure was robust and testable; let it become a refactor or bug ticket later.
He is a junior, yes.
I know the aggravation of getting a hairball of code to review, but I often hold my nose. At least find a better reason to send it back, like a specific bug.
If you're sure cyclomatic complexity should be minimized, I think you should put such rules in a pre-commit hook or something that runs before a reviewer ever sees the code. You should only have to help with that if someone can't figure out how to make it pass.
If you're not willing or politically able to implement that, you might be wasting time on your personal taste that the team doesn't agree with. Personally I'm pretty skeptical of cyclomatic complexity's usefulness as a metric.
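The pre-commit gate suggested above could be sketched in a few lines. This is a rough illustration, not a real tool: it approximates cyclomatic complexity as one plus the number of branch points in a function's AST, and the threshold of 10 is an arbitrary example. Real hooks usually call an existing checker such as flake8's mccabe plugin (`--max-complexity`) instead.

```python
import ast

# Node types treated as branch points for this rough approximation.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def approx_complexity(func_source: str) -> int:
    """Roughly estimate cyclomatic complexity: 1 + branch points."""
    tree = ast.parse(func_source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

def check(func_source: str, limit: int = 10) -> bool:
    """Return True if the source stays under the complexity limit."""
    return approx_complexity(func_source) <= limit
```

A pre-commit hook would run a check like this over changed files and reject the commit before a reviewer ever sees it, turning a taste argument into an automated rule the team agreed on once.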
I just used it here to approximately convey the scale.
The original function was full of mutable state (not required), special cases (not required), and extra return statements (not required). It also had private helper methods that were mocked in the tests (!!!).
All of this for what should have been a pure function: immutable object in, immutable object out.
And yes, he was a junior.
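A hypothetical before/after sketch (not the actual code under review) of the kind of refactor described: mutable accumulators, special cases, and scattered returns collapsed into a pure immutable-in, immutable-out function. The `Order` type and pricing rules are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    subtotal: float
    is_member: bool

# Before: mutable state, a special case, and multiple early returns --
# the shape of code that inflates cyclomatic complexity.
def price_before(order):
    total = order.subtotal          # mutable accumulator
    if total == 0:
        return 0.0                  # special case
    if order.is_member:
        if total > 100:
            total = total * 0.8
            return total
        total = total * 0.9
        return total
    if total > 100:
        total = total * 0.95
        return total
    return total

# After: a pure function with one rule table and a single exit.
def price_after(order: Order) -> float:
    discount = {
        (True, True): 0.8,    # member, large order
        (True, False): 0.9,   # member, small order
        (False, True): 0.95,  # non-member, large order
        (False, False): 1.0,  # non-member, small order
    }[(order.is_member, order.subtotal > 100)]
    return order.subtotal * discount
```

The second version needs no mocking at all to test: construct an `Order`, call the function, assert on the return value.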
I always approve a change with comments for nits that are optional to address. I only hold back approval if there is a legitimate flaw of some sort. Generally this leads to small changes almost always getting approved on the first shot, but larger changes needing at least one back and forth. AI code review tools make it much easier to spot legitimate problems these days.
Do people really argue about variable names? Most review comments I see are fairly trivial, but almost never subjective (leftover debug log, please add a comment here, etc.). Maybe it helps that many of our seniors are from a team where, for quite a while, we had no auto-formatter or style guide at all. I think everyone should experience that a random mix of `){` and `) {` does not really impact you in any way beyond the mild irritation of a crooked painting. There's a difference between aesthetically bothersome and actually harmful. Not to say that you shouldn't run a formatter, but just for some perspective.
>Do people really argue about variable names?
Of course they do. A program's code is mostly a graph of names; they can be cornerstones of its clarity, or sources of confusion and bugs.
The first thing I do when debugging is to ensure proper names; sometimes that alone is enough to make the bug obvious.
The greatest barrier to understanding is not lack of knowledge but incorrect knowledge. That's why good names matter. And naming things is hard, which is why it makes sense to comment on variable names in a review.
I have seen this mostly on teams which refuse to formalize preferences into a style guide.
I have fixed this by forcing the issue: we get together as a team, set a standard, and document it. If we can enforce it automatically with tools, we do. If not, you get a comment with a link to the style guide and are told to fix it.
Style is subjective but consistency is not. Having a formal style guide which is automatically enforced helps with onboarding and code review as well.
Yes. 80% of comments on my PRs are "change _ to -" or something like that.
PR #467 - Reformat code from tabs to spaces
PR #515 - Reformat code from spaces to tabs
> 2. 90% of PR comments are arguments about variable names.
This sort of comment is meaningless noise that people add to PRs to pad their management-facing code review stats. If this is going on in your shop, your senior engineers have failed to set a suitable engineering culture.
If you are one of the seniors, schedule a one-on-one with your manager, and tell them in no uncertain terms that code review stats are off-limits for performance reviews, because it's causing perverse incentives that fuck up the workflow.
The most senior guy's reviews are the worst: they take multiple rounds, and each round finds new problems. The manager thinks this contributes to code quality. I was denied a promotion because I failed to convince half of the company to drop everything and do my manager's pet project, which had literally zero business value.
Yeah, I'm afraid that's an engineering culture that is thoroughly cooked. There's not much choice except to keep your head down until you're ready to cut your losses.
That seems to be more about the company and its culture than about how code review is supposed to work.
I have been involved in enough code reviews both in a corporate environment and in open source projects to know this is an outlier. When code review is done well, both the author and reviewer learn from the experience.
People always make mistakes, like forgetting to include a change. For me, the point of PRs is to try to weed out the costly ones, though automated tests should hopefully catch most of them.
The point of PRs is not to avoid mistakes (though sometimes this can happen). Automated tests are the tool to weed out those kinds of mistakes. The point of PRs is to spread knowledge. I try to read every PR, even if it's already approved, so I'm aware of what changes there are in code I'm going to own. They are the RSS feed of the codebase.
I used to do this! I can’t anymore, not with the advent of AI coding agents.
My trust in my colleagues is gone, I have no reason to believe they wrote the code they asked me to put my approval on, and so I certainly don’t want to be on a postmortem being asked why I approved the change.
Perhaps if I worked in a different industry I would feel like you do, but payments is a scary place to cause downtime.
As far as I'm concerned if I approved the PR I'm equally responsible for it as the author is. I never make nitpick comments and I still have to point out meaningful mistakes in around 30% of reviews. The percentage has only risen with AI slop.