Comment by eieio

13 days ago

at my last job code review was done directly in your editor (with tooling to show you diffs as well).

What this meant was that instead of leaving nitpicky comments, people would just change things that were nitpicky but clear improvements. They'd only leave comments (which blocked release) for stuff that was interesting enough to discuss.

This was typically a big shock for new hires who were used to the "comment for every nitpick" system; I think it can feel insulting when someone changes your feature. But I quickly came to love it and can't imagine doing code review any other way now. It's so much faster!

I'm not sure how to tie this to AI code review tbh. Right now I don't think I'd trust a model's taste for when to change things and when to leave a comment. But maybe that'll change. I agree that if you automated away my taste for code it'd put me in a weird spot!

This is the workflow I've always dreamed of. In a lot of ways, making a change which is then submitted as a patch to their patch isn't really that different from submitting a comment on their patch. The workflow of doing that directly in your editor is just wonderful.

If I had to pick, I actually think ONLY being able to submit "counter-patches" would be better than only being able to submit comments. Comments could just be actual programming language style comments submitted as code changes.

What if you have two people with different ideas of how to name a certain variable and they just flip the name back and forth every release?

I like this review method too though, and like that some pr review tools have a 'suggest changes' and 'apply changes' button now too

  • > What if you have two people with different ideas of how to name a certain variable and they just flip the name back and forth every release?

    Fire both. There is no amount of skill and productivity that can justify that amount of pettiness.

  • Typically in this system you encode obligations - e.g. "eieio should review, or at least be aware of, all changes made to this library." I think that means you're unlikely in practice to have a problem like that, which (unless the team is not functioning well) requires two people who care deeply about the variable name and don't know that someone else is changing it.

  • I think it's a good idea to have a style guide of sorts that you can point to when people sweat the small stuff.

  • > What if you have two people with different ideas of how to name a certain variable and they just flip the name back and forth every release?

    You fire both or at least one of them. Problem solved.

If minor mistakes are corrected without the PR author's input, do they ever learn to stop making those mistakes themselves? It seems like a system where you just never bother to learn, e.g., style conventions, because reviewers just apply edits as needed.

> What this meant was that instead of leaving nitpicky comments, people would just change things that were nitpicky but clear improvements. They'd only leave comments (which blocked release) for stuff that was interesting enough to discuss.

This is my dream; I've only once had a team with little enough ego to actually achieve it, and for an unfortunately short period of time. If there's a 99% chance the other person is going to say 'oh yeah, duh' or 'sure, whatever', then it's just wasting both of your time to not just do it.

That said, I've had people get upset over merging their changes for them after a LGTM approval when I also find letting it sit to be a meaningless waste of time.

Interesting approach. I think it could make reviewers more serious about their feedback. Comments are a bit too casual and may contain more "unconstructive" information.

I just this morning had someone "nitpick" on a PR I made and ask for a change that would have broken the code.

If the reviewer can make changes without someone reviewing their change, it's just waiting to blow up in your face.

  • Yes, in the system I'm describing if a reviewer changed your code, you reviewed their change.

That sounds great. Was that proprietary tooling? I'd be interested in some such thing.

  • The tool (iron) isn't open source, but there are a bunch of public talks and blogs about how it works, many of which are linked from the github repo[1].

    It used to be "open source" in that some of the code was available, but afaik it wasn't ever possible to actually run it externally because of how tightly it integrated with other internal systems.

    [1] https://github.com/janestreet/iron

  • If I understood correctly, the same can be done on VS Code with the github plugins (for github PRs)

    It's pretty straightforward: you checkout a PR, move around, and either make some edits (that you can commit and push to the feature branch) or add comments.

    • Good to know about its existence. I think I'll have to do my own sleuthing though, since I'm a (neo)vim user who dislikes GitHub.

  • Yeah, it's called git: make your own branch from the PR branch, commit and push the nitpick change, tell the author, and they can cherry-pick it if they approve.
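
    That flow needs nothing beyond plain git. Here's a runnable local sketch of it (all names here are made up for illustration: `feature-x`, `app.py`, the `fooCnt` rename), with the reviewer branching off the PR branch and the author cherry-picking the fix if they approve:

    ```shell
    set -e
    tmp=$(mktemp -d) && cd "$tmp"
    git init -q repo && cd repo
    git -c user.email=a@example.com -c user.name=author commit -q --allow-empty -m "init"

    # Author's PR branch, containing a nitpick-worthy variable name.
    git checkout -q -b feature-x
    echo 'fooCnt = 1' > app.py
    git add app.py
    git -c user.email=a@example.com -c user.name=author commit -q -m "add feature"

    # Reviewer makes their own branch from the PR branch and commits the nitpick fix.
    git checkout -q -b feature-x-nitpicks
    echo 'fooCount = 1' > app.py
    git -c user.email=r@example.com -c user.name=reviewer commit -q -am "rename fooCnt -> fooCount"
    fix=$(git rev-parse HEAD)

    # Author approves and cherry-picks the reviewer's commit onto the PR branch.
    git checkout -q feature-x
    git -c user.email=a@example.com -c user.name=author cherry-pick "$fix" >/dev/null
    cat app.py   # fooCount = 1
    ```

    With a shared remote you'd `git push origin feature-x-nitpicks` instead of cherry-picking locally, and the author pulls the commit over when they approve.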

    Gitlab has this functionality right in the web UI. Reviewers can suggest changes, and if the PR author approves, a commit is created with the suggested change. One issue with this flow is that it doesn't run any tests on the change before it's actually in the PR branch, so... Really best for typos and other tiny changes.

    Alternatively you actually, you know, _collaborate_ with the PR author, work it out, run tests locally and/or on another pushed branch, and someone then pushes a change directly to the PR.

    The complaints about nitpicks slowing things down too much or breaking things sound like solo-hero devs who assume their god-like PRs should be effectively auto-approved because how could their code even contain problems... No wonder they love working with "Dr Flattery the Always Wrong Bot".

    *(Hilarious name borrowed from Angela Collier)

    • I think you misunderstood the tooling I was asking about. This is what was mentioned:

      > at my last job code review was done directly in your editor (with tooling to show you diffs as well).

      That's not covered by git itself. And it's not covered by Gitlab, GitHub, or any other web-based forge.

      > Alternatively you actually, you know, _collaborate_ with the PR author, work it out, run tests locally and/or on another pushed branch, and someone then pushes a change directly to the PR.

      Of course you should collaborate with the author. This tooling is a specific means to do that. You yourself are of course free to not like such tooling for whatever reason.

      > The complaints about nitpicks slowing things down too much or breaking things sound like solo-hero devs who assume their god-like PRs should be effectively auto-approved because how could their code even contain problems... No wonder they love working with "Dr Flattery the Always Wrong Bot".

      Did you maybe respond to the wrong person? I'm not sure how that relates to my comment at all.