
Comment by 3036e4

3 days ago

What bothered me for a long time with code reviews is that almost all the useful things they catch (i.e. not nit-picking about subjective minor things that don't really matter) come much too late in the process. Not rarely, the only useful outcome of a review (if any) is that everything has to be redone from scratch with a completely new design, or that the change is abandoned because it turns out it should never have been done at all.

It always seems as if the code review is the only time when all stakeholders really get involved and start thinking about a change. There may be some discussion earlier on in a Jira ticket or meeting, and with some luck someone even wrote a design spec, but there will still often be someone from a different team or a distant part of the organization who only hears about the change when they see the code review. This includes me. I often only notice that some other team implemented something stupid because I suddenly get a notification that someone posted a code review for a part of the code that I watch for changes.

Not that I know how to fix that. You can't have everyone in the entire company spend time looking at every possible thing that might be developed in the near future. Or can you? I don't know. It doesn't seem to ever happen anyway. At university in the 1990s, in a course about development processes, there weren't only code reviews but also design reviews. That isn't something I have ever encountered in the wild (in any formal sense), and I don't know if even a design review process would be able to catch all the things you would want to catch BEFORE starting to implement something.

> and that isn't something I ever encountered in the wild (in any formal sense)

Because in the software engineering world there is very little engineering involved.

That being said, I also think that the industry is unwilling to accept the slowness of a proper engineering process, for various reasons, including the non-criticality of most software and the possibility of fixing bugs and errors on the fly.

Other engineering fields enjoy no such luxuries: the bridge either holds the train or it doesn't, you either get the manufacturing plant right or there's little room for fixing it, the plane's engine either works or it doesn't.

Different stakes and patching opportunities lead to different practices.

  • There is plenty of it on large-scale enterprise projects, but then that whole area is looked down on by "real developers".

    Also, in many countries, to call oneself a Software Engineer one actually has to hold a proper degree from a certified university or professional college, validated by the country's engineering order.

    Because naturally a 5-year (or 3-year, depending on the country) degree in Software Engineering is the same as a six-week bootcamp.

    • I never finished my degree, but I believe I'm a very good developer (my employers agree). In my day, most good programmers were self-taught.

      I don't mind (hypothetically) not being allowed to call myself an "engineer", but I do mind the false dichotomy of "5-year course" vs "six-week bootcamp". In the IT world it's entirely possible to teach yourself everything, and to learn it better than any one-size-fits-all course ever could.


  • It still is engineering; you're only mistaken about the design phase.

    Writing code is the design phase.

    You don’t need a separate design phase to do design.

    I'll drop a link to the relevant video later.

    • I see there has been a “spirited discussion” on this. We can get fairly emotionally invested into our approaches.

      In my experience (and I have quite a bit of it, in some fairly significant contexts), “It Depends” is really where it’s at. I’ve learned to take an “heuristic” approach to software development.

      I think of what I do as “engineering,” but not because of particular practices or educational credentials. Rather, it has to do with the Discipline and Structure of my approach, and a laser focus on the end result.

      I have learned that things don’t have to be “set in stone,” but can be flexed and reshaped, to fit a particular context and development goal, and that goals can shift, as the project progresses.

      When I have worked in large, multidisciplinary teams (like supporting hardware platforms), the project often looked a lot more “waterfall” than when I have worked in very small teams (or alone), on pure software products. I’ve also seen small projects killed by overstructure, and large projects killed by too much flexibility. I’ve learned to be very skeptical of “hard and fast” rules that are applied everywhere.

      Nowadays, I tend to work alone, or on small teams, achieving modest goals. My work is very flexible, and I often start coding early, with an extremely vague upfront design. Having something on the breadboard can make all the difference.

      I’ve learned that everything that I write down, “ossifies” the process (which isn’t always a bad thing), so I avoid writing stuff down, if possible. It still needs to be tracked, though, so the structure of my code becomes the record.

      Communication overhead is a big deal. Everything I have to tell someone else, or that they need to tell me, adds rigidity and overhead. In many cases, it can’t be avoided, but we can figure out ways to reduce the burden of this crucial component.

      It’s complicated, but then, if it were easy, everyone would be doing it.

    • Googler, but opinions are my own.

      I disagree. The design phase of a substantial change should be done beforehand, with the help of a design doc. That forces you to put in writing (in a way that is understandable by others) what you are envisioning. This exercise is really helpful in forcing you to think about alternatives, pitfalls, pros and cons, etc. This way, once stakeholders (your TL, other team members) have agreed, the reviews related to that change become only code-related (style, "use this standard library function that does it", etc.), but the core idea is there.


    • I also recently read a series of blog posts where the author, Hillel Wayne, talked to several "traditional" engineers who had made the switch to software. He came to a similar conclusion, and while I was previously on the fence about how much of what software developers do could be considered engineering, it convinced me that software engineer is a valid title and that what we do is engineering. First post here: https://www.hillelwayne.com/post/are-we-really-engineers/


    • > Writing code is the design phase.

      No, it really isn't. I don't know which amateur operation you've been involved with, but that is really not how things work in the real world.

      In companies that are not entirely dysfunctional, each significant change to the system involves a design phase, which often includes reviews from stakeholders and involved parties, such as security reviews and data protection reviews. These tend to happen before any code is even written. This doesn't rule out spikes, but their role is to verify and validate requirements and approaches, and to allow new requirements to emerge that feed back into the actual design process.

      The only place where cowboy coding has a place is in small refactorings, features, and code fixes.


  • > Because in the software engineering world there is very little engineering involved.

    I can count on one hand the number of times in the past few years that I've been given time for a planning phase on anything less than a "major" feature. Oddly, the only place where I was able to push good QA, testing, and development practices was at an engineering firm.

I work in a small team of essentially 4-6 core developers. When I develop a feature, I usually talk it over with my coworkers once I've made a rough draft in my head of how I'd approach it. They do the same, so our code reviews mostly catch only minor code smells etc., but we usually decide on the architecture together (2-3 people usually).

I find this to be one of the most important things in our team. Once people don't agree on code, it all kind of goes downhill, with nobody wanting to interact with code they didn't write, for various reasons.

In bigger orgs I believe it's still doable this way, as long as responsibilities are shared properly and it's not just 4 guys who know it all with 40 others depending on them.

  • I strongly second this. In my own experience of about 30 years, I have seen this method work almost always.

> It always seems as if the code review is the only time when all stakeholders really gets involved and starts thinking about a change.

That is a problem with your organization, not with Git or any version control system. PRs are orthogonal to it.

If you drop by a PR without being aware of the ticket that made the PR happen and the whole discussion and decision process that led to the creation of said tickets, you are out of the loop.

Your complaint is like a book publisher complaining that the printing process is flawed because seeing the printed book come off the production line is the only time when all stakeholders really get involved. That only happens if you work in a dysfunctional company.

  • I've seen this in many places, so I read the original statement as a complaint about a widespread problem, not an exception in one company.

    Sometimes it is not even about a PR; it is about an entire project. I always do reviews (design and code, in separate stages), but on many projects the code is almost complete by the time people come for the design review, and by the time we get to code review it is usually too late to fix anything other than showstoppers. I have worked in small companies and in huge companies (over 100k employees); some are better, most are bad, in my experience. YMMV, of course.

> Not that I know how to fix that. You can't have everyone in the entire company spend time looking at every possible thing that might be developed in the near future. Or can you?

You don't need to. I've seen this generally work with some mix of the following:

1. Try to decouple systems so that it's less likely for someone in a part of the org to make changes that negatively impact someone in a more distant part of the org.

2. Some design review process: can be formal "you will write a design doc, it will be reviewed and formally approved in a design committee" if you care more about integrity and less about speed, or can be "write a quick RFC document and share it to the relevant team(s)".

3. Some group of people who have broad context on the system/codebase (usually more senior or tenured engineers). Again, this can be formal ("here is the design review committee") or less formal ("run it by this set of folks who know their stuff"). If done well, I'd say you can get pretty broad coverage from a group like this. Definitely not "everyone in the entire company". That group can also redirect or pull others in.

4. Accept that the process will be a bit lossy. Not just because you may miss a reviewer, but also because, once you start implementing, the reality of implementation is sometimes different from what people expected. You can design the process for this by encouraging POCs, draft implementations, or spikes, and by setting the expectation that not all code will make it into production (any creative process includes drafts, rewrites, etc. that may not be part of the final form, but help explore the best final form).

I've basically seen this work pretty well at company sizes from 5 engineers all the way up to thousands.

Where I work we tend to write RFCs for fundamental design decisions. Deciding what counts as a "fundamental design decision" is sometimes self-moderated in the moment but we also account for it when making long term plans. For example when initially creating epics in Jira we might find it hard to flesh out as we don't really know how we're going to approach it, so we just start it off with a task to write an RFC.

These can be written either for just our team or for the eyes of all other software teams. In the latter case we put these forward as RFCs for discussion in a fortnightly meeting, which is announced well in advance so people can read them, leave comments beforehand, and only need to attend the meeting if there's an RFC of interest to them up for discussion.

This has gone pretty well for us! It can feel like a pain to write some of these, and at times I think we overuse them somewhat, but I much prefer our approach to any other place I've worked where we didn't have any sort of collaborative design process in place at all.

  • I view it like this: code review is a communication tool. You can discuss concrete decisions, as opposed to hand-waving and explaining in the conceptual space, which of course has its place but is limited.

    But writing the whole working code just to discuss some APIs is too much, and it will require extra work to change if problems surface in review.

    So a design document is something in the middle: it should draw the line where the picture of the planned change is as clear as possible and can be communicated to stakeholders.

    Other possible middle grounds include PRs that don’t pass all tests, or that don’t even build at all. You just have to choose the most appropriate sequence of communication tools to reach agreement in the team and get to a point where everyone is on the same page about all the decisions and what the final picture looks like.

I share your feelings.

Regarding design reviews, we used to have them at my current job. However, we stopped doing both formal design documents and design reviews in favor of prototyping and iterative design.

The issue with the design phase was that we often failed to account for some important details. We spent considerable time discussing things and then, while implementing, realized that we had omitted some important detail or insight. But since we had already invested so much time in the design phase, it was tempting to take shortcuts.

What's more, design reviews were not conducted by the whole team, since it would be counterproductive to have 10 or more people in the same room. So we'd still discover things during code reviews.

And then, not everyone is good at, or motivated to, produce good design documents.

In the end, I believe that any development team above 5 people is bound to encounter these kinds of inefficiencies. The ideal setup is to put 5 people in the same room with the PO and close to a few key users.

  • > The ideal setup is to put 5 people in the same room with the PO and close to a few key users.

    (I suspect you are aware, but just in case this is new to you.) This is essentially the core of Extreme Programming.

    • What team size does XP recommend, if any?

      It seems like the standard around me is between 8 to 12 people. This is too many in my opinion.

      I believe this is because management is unknowingly aiming for the biggest team that does not completely halt, instead of seeking the team that delivers the most bang for the buck.


One way to fix it: pair programming. You're getting feedback in real time as you write the code.

Unfortunately, the conditions where it works well can be difficult to set up. You need people who are into it and have similar schedules. And you don't want two people waiting for tests to run.

  • Anyone actually done long-term pair programming and lived to tell the tale? Is it real, or just a utopian fantasy?

    • It's not for everyone. Some people have excellent reasons why it isn't workable for them. Others have had terrible experiences. It takes a great deal of practice to be a good pair and, if you don't start by working with an experienced pair, your memories of pairing are unlikely to be fond.

      However.

      I paired full-time, all day, at Pivotal, for 5 years. It was incredible. Truly amazing. The only time in my career when I really thrived. I miss it badly.

  • It's crazy that we go out of our way to design processes (code review, design review) to avoid actually just... working together? And then we design organizations that optimize for those processes instead of optimizing for collaboration.

With AI-based coding (no, I won't use "vibe coding", thank you), this workflow improves a lot. Instead of jumping straight into code, I have my engineers write a Notion doc that describes what needs to be built. Think of it as an LLD, but really it's a prompt for Claude Code. This forces them to think through the problem at a low level, and they share the doc with me before sending it to Claude, so I get to review early in the process. Once we finalize this "LLD" or "low-level prompt", they hand it to Claude. The next time I see the work is in a GitHub PR. At that point, we rarely have to throw everything away and start from scratch.

Seems like your organization is lacking structure and communication.

Where I work, the structure is such that most parts of the codebase have a team that is responsible for them and makes the vast majority of changes there. If any "outsider" plans a change, they come talk to the team and coordinate.

And we also have strong intra-team communication. It's clear who is working on what and we have design reviews to agree on the "how" within the team.

It's rare that what you describe happens. 95% of the code reviews I do are without comments or only with minor suggestions for improvement. Mainly because we have developed a culture of talking to each other about major things beforehand and writing the code is really just the last step in the process. We also have developed a somewhat consistent style within the teams. Not necessarily across the teams, but that's ok.

TL;DR: It's certainly possible to do things better than what you are experiencing. It's a matter of structure, communication, and culture.

This can be used in any process where the result is only judged at the end.

The solution here may be to add a midterm check. I think this is what you mean by a "design review."

In my experience, there are some rules that need to be followed for it to work.

- Keep the number of stakeholders involved in all decisions, including PR, as small as possible.

- Everyone involved should take part in this check. That way, no one will be surprised by the results.

- This check should be documented, e.g. in the ticket.


When and how to do this check and how to handle disagreements depend on the task, culture, and personalities.

  • We should do something similar with AI-coding.

    If you don't have a documented mid-term check, a vibe-coded PR might not be what you expected.

You can get the same thing if you do user interface testing after you've built the thing. A design system can help there - at the very least, the change can feed back into the next revision of the playbook.

Even if you can't fix it this time, hopefully you've taught someone a better pattern. The direction of travel should still be positive.

On personal projects I've used architectural decision records, but I've never tried them with a team.
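For anyone who hasn't tried them: an ADR is just a short, numbered text file that lives in the repo next to the code. A minimal sketch in the commonly used Nygard style (the decision and its details below are made up for illustration; the section names are a convention, not a requirement):

```markdown
# ADR 0007: Use PostgreSQL for the reporting store

Status: Accepted (supersedes ADR 0004)
Date: 2024-05-12

## Context
Reporting queries have outgrown the embedded SQLite database:
we now need concurrent writers and queries over ~50M rows.

## Decision
Move the reporting store to PostgreSQL. Keep SQLite for the
local development fixture data only.

## Consequences
- Deployment now requires a managed PostgreSQL instance.
- Migrations move from ad-hoc scripts to a migration tool.
- ADR 0004 (embedded-only storage) no longer applies.
```

Because each record is small, immutable once accepted, and reviewed like any other file in the repo, it doubles as the "documented mid-term check" several comments above are asking for: the design discussion happens on the ADR diff, before the implementation PR exists.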