Comment by nirvdrum

2 months ago

If you’re just committing for your own sake, that workflow sounds productive. I’ve been asked to review PRs with 20+ commits with a “wip” or “.” commit message with the argument: “it’ll be squash merged, so who cares!”. I’m sure that works well for the author, but it’s not great for the reviewer. Breaking change sets up into smaller logical chunks really helps with comprehension. I’m not generally a fan of people being cavalier with my time so they can save their own.

For my part, I find the “local history” feature of the JetBrains IDEs gives me automatic checkpoints I can roll back to without needing to involve git. On my Linux machines I layer in ZFS snapshots (Time Machine probably works just as well for Macs). This gives me the confidence to work throughout the day without needing to compulsively commit. These have the added advantage of tracking files I haven’t yet added to the git repo.

There are two halves here. Until the PR is opened, the author should feel free to have 20+ "wip" commits (or, in my case, "checkpoint"). However, it is also up to the author to curate those commits before pushing and opening the PR.

So when I open a PR, I'll have a branch with a gajillion useless commits, and then curate them down to a logical set of commits with appropriate commit messages. Usually this is a single commit, but if I want to highlight some specific pieces as being separable for a reviewer, it'll be multiple commits.

The key point here is that none of those commits exist until just before I make my final push prior to a PR.
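As a concrete sketch of that curation step (the repo, file, and messages below are invented for illustration): one way to collapse a pile of checkpoint commits into a single reviewable commit is a soft reset to the branch's root followed by re-committing, which is equivalent to squashing everything in `git rebase -i`.

```shell
# Illustrative only: build a throwaway repo with three "checkpoint"
# commits, then collapse them into one curated commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email author@example.com
git config user.name "Author"

echo one   > feature.txt; git add feature.txt; git commit -qm "checkpoint"
echo two  >> feature.txt; git commit -qam "checkpoint"
echo three >> feature.txt; git commit -qam "checkpoint"

# Move the branch back to its root commit, keeping all changes staged,
# then rewrite that commit with a real message.
git reset -q --soft "$(git rev-list --max-parents=0 HEAD)"
git commit -q --amend -m "Add the feature, explained properly"

git log --oneline    # now shows a single curated commit
```

The same result falls out of `git rebase -i` by marking every commit after the first as `squash`; the reset approach just avoids the editor round-trip when the target is a single commit.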

  • I clean up commits locally as well. But, I really only commit when I think I have something working and then collapse any lint or code formatting commits from there. Sometimes I need to check another branch and am too lazy to set up worktrees, so I may create a checkpoint commit and name it in a way that reminds me to do a `git reset HEAD^` and resume working from there.

    But, if you're really worried about losing 15 minutes of work, I think we have better tools at our disposal, including those that will clean up after themselves over time. Now that I've been using ZFS with automatic snapshots, I feel hamstrung working on any Linux system just using ext4 without LVM. I'm aware this isn't a common setup, but I wish it were. It's amazing how liberating it is to edit code, update a config file, or install a new package when you know you can roll back the entire system with one simple command (or restore a single file if you need that granularity). And it works for files you haven't yet added to the git repo.

    I guess my point is: I think we have better tools than git for automatic backups and I believe there's a lot of opportunity in developer tooling to help guard against common failure scenarios.
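    The checkpoint-commit pattern mentioned above can be sketched roughly like this (the repo, branch, and file names are all made up for illustration):

    ```shell
    # Illustrative only: park half-done work in a checkpoint commit,
    # switch branches, come back, and pop it with `git reset HEAD^`.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git config user.email author@example.com
    git config user.name "Author"

    echo done > lib.txt; git add lib.txt; git commit -qm "real work"
    git branch -q colleague-pr          # stand-in for a colleague's branch

    echo half-finished > wip.txt        # half-done, not ready to keep
    git add wip.txt
    git commit -qm "CHECKPOINT: git reset HEAD^ before resuming"

    git checkout -q colleague-pr        # go review the PR...
    git checkout -q -                   # ...and come back
    git reset -q HEAD^                  # drop the checkpoint commit
    git status --short                  # wip.txt is back, uncommitted
    ```

    The reminder in the commit message is the whole trick: it keeps a throwaway commit from accidentally surviving into the curated history.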

    • I don't commit as a backup. I commit for other reasons.

      Most common is I'm switching branches. Example use case: I'm working locally, and a colleague has a PR open. I like to check out their branch when reviewing as then I can interact with their code in my IDE, try running it in ways they may not have thought of, etc.

      Another common reason I switch branches is that sometimes I want to try my code on another machine. Maybe I'm changing laptops, or I just want to run the code somewhere else. Whatever. So I'll push a WIP branch with no intention of it passing any sort of CI/CD just so I can check it out on the other machine.

      The throughline here is that these are moments where the current state of my branch is in no way, shape, or form intended as an actual valid state. It's just whatever state my code happened to be in before I needed to save it.
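      A minimal sketch of that WIP hand-off (the shared remote is simulated with a local bare repo, and "laptop-a"/"laptop-b" are hypothetical stand-ins for the two machines):

      ```shell
      # Illustrative only: push a throwaway WIP branch so another machine
      # can pick up the exact working state.
      set -e
      work=$(mktemp -d)
      git init -q --bare "$work/remote.git"

      git clone -q "$work/remote.git" "$work/laptop-a" 2>/dev/null
      cd "$work/laptop-a"
      git config user.email author@example.com
      git config user.name "Author"
      echo half-done > feature.txt
      git checkout -q -b wip/handoff
      git add feature.txt
      git commit -qm "wip: saving state, not meant to pass CI"
      git push -q -u origin wip/handoff

      # On the "other machine": clone and resume from the WIP branch.
      git clone -q "$work/remote.git" "$work/laptop-b" 2>/dev/null
      cd "$work/laptop-b"
      git checkout -q wip/handoff
      cat feature.txt   # the half-done state travelled with the branch
      ```

      Deleting the branch on the remote afterwards keeps these throwaway states from cluttering anyone else's view of the repo.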

Why do you care about the history of a branch? Just look at the diff. Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.

  • A well laid out history of logical changes makes reviewing complicated change sets easier. Rather than one giant wall of changes, you see a series of independent, self-contained changes that can be reviewed on their own.

    Having 25 meaningless “wip” commits does not help with that. It’s fine when something is indeed a work in progress. But once it’s ready for review it should be presented as a series of cleaned up changes.

    If it is indeed one giant ball of mud, then it should be presented as such. But more often than not, that just shows a lack of discipline on the part of the creator. When separated into their own commits, variable renames, whitespace changes, and other cosmetic things can be skipped over to focus on the meat of the PR.

    From my own experience, people who work in open source and have been on the review side of large PRs understand this the best.

    Really the goal is to make things as easy as possible for the reviewer. The simpler the review process, the less reviewer time you're wasting.

    • > A well laid out history of logical changes makes reviewing complicated change sets easier.

      I've been on a maintenance team for years, and it's been a massive help here in our svn repos, where squashing isn't possible. Those intermediate commits with good messages are the only context you get years down the line, when the original developers are gone or don't remember the reasons for something, and they've saved us so many times.

      I'm fine with manual squashing to clean up those WIP commits, but a blind squash-merge should never be done. It throws away too much for no good reason.

      For one quick example, code linting/formatting should always be a separate commit. A couple times I've seen those introduce bugs, and since it wasn't squashed it was trivial to see what should have happened.


    • > A well laid out history of logical changes makes reviewing complicated change sets easier. Rather than one giant wall of changes, you see a series of independent, self contained, changes that can be reviewed on their own.

      But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.

      I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?


  • On the contrary, it seems to me that it is your approach which is incompatible with others. I'm not the same person you were replying to but I want the history of a branch to be coherent, not a hot mess of meaningless commits. I do my best to maintain my branches such that they can be merged without squashing, that way it reflects the actual history of how the code was written.

  • > Why do you care about the history of a branch?

    Presumably, a branch is a logical segment of work. Otherwise, just push directly to master/trunk/HEAD. That's what people did for a long time with CVS, and it arguably worked to some extent. Using merge commits is pretty common and, as such, that branch will get merged into the trunk. Being able to understand that branch in isolation is something I've found helpful in understanding the software as a whole.

    > Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.

    I get you disagree with me, but you could be less dismissive about it. Work however you want -- I'm certainly not stopping you. I just don't want your productivity to come at the expense of mine. And I offered up other potential (and, IMHO, superior) solutions from both developer and system tooling.

    I suppose what type of project you're working on matters. The "treat git like a versioned zip file" approach of squashed merges works reasonably well for SaaS applications because you rarely need to roll anything back. However, I've found a logically structured history has been indispensable when working on long-lived projects, particularly in open source. It's how I'm able to dig into a 25-year-old OSS tool and be reasonably productive with it.

    To the point I think you're making: sure, I care what changed, and I can see that with `diff`. But, more often, if I'm looking at SCM history I'm trying to learn why a change was made. Some of that can be inferred by seeing what other changes were made at the same time. That context can be explicitly provided with commit messages that explain why a change was made.

    Calling it incompatible with how people work is a pretty bold claim, given the practice of squash merging loads of mini commits is a pretty recent development. Maybe that's how your team works and if it works for you, great. But, having logically separate commits isn't some niche development practice. Optimizing for writes could be useful for a startup. A lot of real world software requires being easy to maintain and a good SCM history shines there.

    All of that is rather orthogonal to the point I was trying to add to the discussion. We have better tools at our disposal than running `git commit` every 15 minutes.