While I am a proponent of code reviews, what this article actually described is mentoring of a junior engineer by a senior engineer, and requiring effective testing to be implemented alongside the feature.
It also shows a broken culture where the other reviewers were not engaged and committed to the success of the feature. When a bug escapes, it's both the author _and_ reviewer at fault.
Disagree. It's only the author's fault. We can't expect engineers to both write code and find every last bug in other people's code, especially when PRs can be thousands of lines long.
A broken culture would be expecting engineers to find every last bug in other people's code and blaming them for bugs.
When issues happen, it is almost never just one person's fault.
Just trace the Whys:
1) Why did the bug happen? These lines of code
2) Why wasn't that caught by tests? The test was broken / didn't exist
3) How was the code submitted without a test? The reviewer didn't call it out.
Arguing about percentages doesn't matter. What matters is fixing the team process so this doesn't happen again. That means formally documented standards for test coverage, plus education and team-culture building so people understand why it's important.
It's not "just the author's fault". That kind of blame game is toxic to teams.
That's the difference between "your code & my code" vs "our code".
In a good culture it's "our code" where everyone shares the responsibility for quality, performance, and functionality.
Our code review process involves a reviewer explicitly taking ownership of the PR, and I think it's a reasonable expectation with the right practices around it. A good reviewer will request a PR containing 1000s of lines be broken up without doing much more than skimming to make sure it isn't bulked up by generated code or test data or something benign like that.
I don't know much about other industries; what would this logic look like in aviation or pharma?
Google has a culture of small PRs. If your PR is too big (think more than 500 lines changed) you’ll actually not be able to get readability reviews on them - unless someone in your team does the readability review for you.
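For teams outside Google that want the same nudge, a soft size gate in CI gets you most of the way. Here is a minimal Python sketch, not anyone's real tooling; the 500-line threshold and the origin/main base branch are assumptions:

    #!/usr/bin/env python3
    """Minimal sketch of a PR size gate for CI (hypothetical; adjust to taste)."""
    import subprocess
    import sys

    MAX_CHANGED_LINES = 500        # assumed threshold, mirroring the ~500-line rule of thumb
    BASE_BRANCH = "origin/main"    # assumed base branch name

    def changed_lines(base: str) -> int:
        # --numstat prints "added<TAB>deleted<TAB>path" per file; sum both columns.
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, deleted, _path = line.split("\t", 2)
            if added.isdigit() and deleted.isdigit():   # binary files show "-"
                total += int(added) + int(deleted)
        return total

    if __name__ == "__main__":
        n = changed_lines(BASE_BRANCH)
        if n > MAX_CHANGED_LINES:
            print(f"Diff touches {n} lines (limit {MAX_CHANGED_LINES}); please split this PR.")
            sys.exit(1)
        print(f"Diff size OK ({n} lines).")

In practice you would probably also exempt generated files and test fixtures, for the same reason a human reviewer skims for them before asking that a big PR be split.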
The broken culture is what you actually suggested and it has a name, the blame culture.
Let's play the blame game, I love you more Let's play the blame game for sure
He's obviously self-praising for a promotion or looking for another job. I don't think it should be taken seriously on the matter of the effectiveness of code reviews.
Neither of those was at play when I wrote this. Try again.
> I also pushed for breaking large changes into smaller commits, because it is impossible to do a thorough review otherwise.
I've found this to be key for both giving and receiving feedback. Even small breaking commits are better in a lot of cases because they invite feedback in a way that 1000+ lines don't.
I miss Phabricator from my time at Meta so much, I made this to achieve a Phabricator-like stacked-commit experience via the git cli: https://pypi.org/project/stacksmith/
This looks really interesting. I've been using a similar tool called spr https://github.com/spacedentist/spr for the last six or so months at work. I really like the stacked diff/PR workflow, but spr has a lot of rough edges and I'm on the lookout for a better alternative.
Do you happen to know how your tool compares to spr? How production-ready is it?
Breaking down the size of the change is truly important; otherwise it's easy to miss things, or to dismiss them as little details because you don't want to block the whole change on a "small" thing (which may only seem small because the PR is now huge).
The author describes how his code reviews that he gave others are successful from his own point of view.
But he does back it up with actual facts (as far as we can trust the author to tell the truth) - the feature that the author gave feedback on shipped without any issues. (The article actually doesn't say whether A was fixed-and-bug-free before B shipped, but it certainly sounds like B was less stressful to ship.)
It’s also a biased view. The author admits that the feature he was involved in took longer to ship initially. Depending on the environment this can be an anti-pattern; don’t we say “release early, release often”?
In the same vein, the author says that the other feature took several releases to become stable. Were those other releases purely bug fixes, or did they give the engineer a chance to get early feedback and incorporate it into the feature?
It’s clear that the author prefers a slow and careful approach, and he judges “success” and “failure” by that metric. It sometimes is the right approach. It sometimes isn’t.
Yes, these were my reviews. But this is from the point of view of the reviewee. I’m telling the story of what this person felt and later told me.
Don't worry, he also asked his own reviewee, who said the reviews were helpful and in no way obnoxious.
I did not ask. This is feedback that the reviewee gave me years after we both left Google.
My statement still stands true regardless. Not worried.
Only somewhat related, but I'd pay decent money to have access to the whole Piper/CitC/Critique/Code Search stack. As much as I've tried to like it, I just don't really like Github's code review tool.
GitHub's code review tool is uniquely bad. Notably, it presents every comment as blocking and requiring sign-off - even a "Glad someone cleaned this up! <thumbs up emoji>" needs clearing before merge.
It also has some UX snafus that cause reviewers to write a number of comments and then forget to publish any of them, leading to a lot of interactions along the lines of "I thought you were going to review my PR?" "I already did?"
Requiring every comment to be resolved is not a standard part of GitHub’s code review system. That is something which your organization has gone out of its way to require.
Overall, I personally find the experience better than GitLab's merge request UI/UX.
Former Googler. I also miss Critique/Gerrit. I've tried a bunch of alternatives, and I like CodeApprove:
https://codeapprove.com/
It's great if you have a team that does code reviews. It works less well for reviewing contributions from external contributors on an open-source project, as the contributor likely just wants to get their PR merged and doesn't want to learn a special reviewing tool.
No affiliation, just a happy customer.
"lookup table of similar technology and services to help ex-googlers survive the real world"
https://github.com/jhuangtw/xg2xg
Do you know if this works with Azure DevOps? I hate their UI. At this point I'd love to use GitHub. But for some reason the higher-ups want us to be on Azure DevOps.
Shameless plug but since you asked ... CodeApprove (https://codeapprove.com) is probably the closest thing you can get to Critique on GitHub. It doesn't help with the Piper/CitC/Code Search parts though, and I agree those were excellent.
How does this handle force pushes and stacked PRs?
Shameless plug, but we built http://CodePeer.com to bring Critique-like features to everyone else. Take it for a spin if you like!
How does this handle force pushes and stacked PRs?
In my experience, mandatory code reviews seldom work very well. Usually it's either stamping PRs or you always run the risk of someone just blocking stuff in an unreasonable manner.
For the most part, code reviews should be optional - if you want to get a review from someone, tag them on your PR and ask. If someone you didn't tag spots something and your PR landed, you can always figure it out and still make a fix.
I will grant an exception for super-fragile parts of the code, but ideally you can refactor, build tests, or do something else that doesn't require a blocking code review to land changes.
I have trouble with code reviews because I can't run the code from the review GUI, so I usually look at the tests run in CI or pull and run it myself, if possible. Is this not a problem other people have? Is this a tooling problem?
By putting breakpoints in the code, and seeing what the changed lines do, I can compare the output line by line with my mental model of what should be happening, and this thoroughly helps me verify the changes.
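For what it's worth, in Python this needs almost no tooling: check out the PR branch and drop a breakpoint() next to the changed lines. A tiny sketch, where apply_discount is a made-up stand-in for whatever function the PR actually touches:

    # Sketch of verifying a reviewed change by stepping through it with pdb.
    # apply_discount is a made-up stand-in for the function the PR touches.
    def apply_discount(price: float, percent: float) -> float:
        return price * (1 - percent / 100)

    def verify_change() -> None:
        breakpoint()                        # drops into pdb; `s` steps into the call below
        result = apply_discount(100.0, 15)  # feed it the inputs you reasoned about
        print(result)                       # compare against your mental model (expect 85.0)

    if __name__ == "__main__":
        verify_change()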
I don't think I ever try to verify the correctness of the code in code review. That seems hopeless to me. The original developer should be assumed to produce working code, which is covered with tests and maybe will go through QA (if you have that). Of course there are occasional bugs you can spot, but the main goal is to review the architectural approach and code clarity. You can also note if something isn't properly covered with tests. If you manage to make any security, performance, or edge case handling suggestions, that's a nice bonus.
Maybe we're in a very different context, but to me if you can't understand what the code does without a debugger, then you ask the author to make the code clearer.
I would like to offer a review on this article. Naming people and projects single characters makes it difficult to follow the content.
They are not antagonistic in nature! Where did they get this idea?
Mandatory CR is mostly a way for mediocre programmers to feel like they're contributing IMHO.
I stopped reading after that opening paragraph. I don't know of anyone I take seriously who thinks that code reviews are bad practice or pure red tape.
I’ve never worked somewhere where mandatory PR reviews didn’t turn into mostly red tape.
The pressure to get work done faster in the long term always wins out over other concerns and you end up with largely surface level speed reviews that don’t catch much of anything. At best they tend to enforce taste.
In 20 years across many companies and thousands of PRs, I’ve never had a reviewer catch a single major bug (a bug that would have required declaring an incident) that would have otherwise gone out. I've pushed a few major bugs to production, but they always made it through review.
I’ve been doing this since well before everyone was using GitHub and requiring PR reviews for every merge. I haven’t noticed a significant uptick in software quality since the switch.
The cost is high enough that I’d like to see some hard evidence justifying it. It just seems to be something people who have never done any different take on faith.
> In 20 years across many companies and thousands of PRs, I’ve never had a reviewer catch a single major bug
Good thing reviews aren't just about catching bugs.
I know an idi*t who claimed, in a code review, that there was a memory leak just by looking at the code (turned out there wasn't). Clearly it was a bullying attempt to stop someone else's progress. Unfortunately it was successful because of people like the ones downvoting you.
Mandatory code review definitely creates red tape. Every place I've been with mandatory code review, I always see people "looking for a stamp".
At my current job, code review requirements are set on a per-folder basis, with many folders not requiring a review. People ask for a review because they want a review (or, sometimes, they don't; for example, I don't ask someone to review every quick one-liner in the systems I'm an expert in).
Sure, there's some subset of commits where a stamp is all that makes sense. On good teams it at least still ensures that two people agree that a stamp is appropriate. Knowing that you need to be able to convince someone the change is good before it goes in is a good forcing function that reduces sloppiness and laziness.
Then you are very lucky. I have definitely met those sorts. I’m even aware of teams that collectively push to the main branch under the promise that they’ll probably look at each other’s code later, maybe.
I saw no proof of the later review for all pushes.
Same.
> Code reviews have a bad rap: they are antagonistic in nature and, sometimes, pure red tape.
I wonder if folks know that this is a job? What are you gonna do, not do it? Cry at night because you forgot for the hundredth time to put token strings in translation files instead of hard-coding them? Come on.
A few days ago there was an article on HN about how engineers abuse code reviews. It's just a tool; the outcome differs based on who's reviewing. If you think code review is intrinsically good, then I'm glad I'm not working with you either.
You would be surprised! I have encountered the attitude that code reviews are a waste of time. It's not common, and I have never seen this attitude "win" across a team/company but it definitely exists. Some engineers are just overconfident, they believe they could fix everything if everyone would just let them code.
From this thread: "Mandatory CR is mostly a way for mediocre programmers to feel like they're contributing IMHO."
Thankfully I'm not working with you
AI is pretty good at code reviews. For reference, I use ChatGPT and Gemini. It's very helpful.
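In case anyone wants to try it, here is roughly what that can look like as a script. This is only a sketch under assumptions: it uses the OpenAI Python SDK with an OPENAI_API_KEY in the environment, and the model name, base branch, and prompt are placeholders, not a recommendation of any particular setup:

    # Minimal sketch: asking a model to review a diff. Assumes the OpenAI Python
    # SDK and OPENAI_API_KEY in the environment; the model name is a placeholder.
    import subprocess
    from openai import OpenAI

    def review_diff(base: str = "origin/main") -> str:
        diff = subprocess.run(
            ["git", "diff", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have access to
            messages=[
                {"role": "system", "content": "You are a careful code reviewer."},
                {"role": "user",
                 "content": f"Review this diff for bugs, clarity, and missing tests:\n\n{diff}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(review_diff())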
An AI tool that could convert large scale changes into a set of small commits would be amazing.
Are you using AI for public contributions? Or a private repo? How does it deal with things like project conventions? e.g. "Currency should be represented using a Money class", "This method should use our utility class", etc. Do you upload the entire project's source code?
The review you get depends on what questions you ask. For example: I wrote a class that wrapped a std::vector<T> as a private data member, and it pointed out that it would be nice if I implemented support for accessing the iterators and the array subscripts. It made these remarks based on how I was using the object. I have uploaded an entire repo to Gemini (as a single file) and asked for broad and fine reviews. It's really quite good.
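The class in question was C++, but the suggestion (forward iteration and subscript access from the wrapped container) is the same idea as this Python sketch; Samples and its contents are made-up names for illustration:

    # Python analogue of the suggestion: forward iteration and subscripting
    # from the wrapped container instead of making callers reach inside.
    # Samples and its contents are made-up names for illustration.
    from typing import Iterator, List

    class Samples:
        def __init__(self, values: List[float]) -> None:
            self._values = values        # private data member, like the std::vector<T>

        def __iter__(self) -> Iterator[float]:
            return iter(self._values)    # exposes iteration

        def __getitem__(self, index: int) -> float:
            return self._values[index]   # exposes the subscript operator

        def __len__(self) -> int:
            return len(self._values)

    s = Samples([1.0, 2.5, 4.0])
    print(s[1], list(s), len(s))         # 2.5 [1.0, 2.5, 4.0] 3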