This was a really interesting read. I'd highly recommend it for anybody who's setting up (or currently maintains) a pre-commit workflow for their developers.
I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.
Instead, put what you want in an optional pre-push hook and also put it into an early CI/CD step for your pull request checker. You'll get the same end result but your fussiest developers will be happier.
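That split might look something like the following sketch: an opt-in pre-push hook that delegates to the same entry point CI runs. The `make lint` target is an assumed placeholder, not something from the thread.

```shell
#!/bin/sh
# Sketch of an opt-in .git/hooks/pre-push hook. It delegates to the same
# entry point the CI step runs, so the two can't drift apart.
# "make lint" is an assumed placeholder; substitute your project's command.
run_checks() {
    make lint
}

main() {
    if ! run_checks; then
        echo "pre-push: checks failed (bypass with: git push --no-verify)" >&2
        return 1
    fi
    return 0
}
# As an installed hook, the file would end with: main "$@"; exit $?
```

Because the hook and CI call the same command, a developer who opts out of the hook loses only feedback latency, not correctness.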
I can second that. If there are multiple commits: https://github.com/tummychow/git-absorb is handy to add formatting changes into the right commit after commits already happened.
There's a weird thing happening on my current project. Sometimes I merge main into my branch and it fails. What fails is the pre-commit hook on the merge commit. Changes in main fail the linting checks in the pre-commit hook. But they still ended up in main, somehow. So the checks on the PR are apparently not as strict as the checks on the pre-commit hook. As a result, many developers have gotten used to committing with `--no-verify`, at which point, what is even the point of a pre-commit hook?
And sometimes I just want to commit work in progress so I can more easily backtrack my changes. These checks are better on pre-push, and definitely should be on the PR pipeline, otherwise they can and will be skipped.
Anyway, thanks for giving me some ammo to make this case.
For the sake of argument, let's say you have a check that caps the number of lines per file and that both you and main added lines in the same file. It's not too weird if that check fails only after merge, right?
That's one benign example of something that can break after a merge even when each branch individually passes pre-merge. In less benign cases, you end up with your branch merged to main and actual bugs in the code.
It's also one reason not to allow "unclean merges", and to require incoming branches to be rebased up to date before they can be merged to the main branch.
You probably want to run the checks on each commit to main in CI and not rely on them being consistently run by contributors.
You do you, but I find rebasing my branch on main instead of merging makes me scratch my head way less.
Why not take the best of both worlds?
Use pre-commit hooks for client-side validation, and run the same checks in CI as well.
I’ve been using this setup for years without any issues.
One key requirement in my setup is that every hook is hermetic and idempotent. I don’t use Rust in production, so I can’t comment on it in depth, but for most other languages—from clang-format to swift-format—I always download precompiled binaries from trusted sources (for example, the team’s S3 storage). This ensures that the tools run in a controlled environment and consistently produce the same results.
> I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.
You will save your org a lot of pain if you do force it, the same as when you force a formatting style rather than letting everyone do as they please.
You can discuss changing it if some parts don't work, but consistency lowers failures, every time.
Enforcement should live in CI. Into people's dev environments, you put opt-in "enablement" that makes work easier in most cases, and gets out of the way otherwise.
It's a good thing you can't force it, because `git commit -n` exists. (And besides, management of the `.git/hooks` directory is done locally. You can always just wipe that directory of any noxious hooks.)
I can accept (but still often skip, with `git push --no-verify`) a time-consuming pre-push hook, but a time-consuming and flaky pre-commit hook is totally unacceptable to my workflows, and I will always find a way to work around it. Like everyone else is saying, if you want to enforce some rule on the codebase then do it in CI and block merges on it.
I'm the type of developer who always has a completely different way of working. I hate pre-commit hooks, and agree that pre-push + an early step in CI is the right thing to do.
You shouldn't be relying on hooks to maintain the integrity of your codebase, and I'm not seeing anything here that makes me want to avoid `pre-commit` (or, more literally, the https://pre-commit.com/ tool). CI must be the source of truth for whether a commit is acceptable.
If you're using `pre-commit` the tool, not merely the hook, you can also use something like https://github.com/andrewaylett/pre-commit-action to run the tool in CI. It's a really good way to share check definitions between local development and CI, meaning you've shifted your checks to earlier in the pipeline.
I use Jujutsu day-to-day, which doesn't even support pre-commit hooks. But the tooling is still really useful, and making sure we run it in CI means that we're not relying on every developer having the hooks set up. And I have JJ aliases that help pre-commit be really useful in a JJ workflow: https://github.com/andrewaylett/dotfiles/blob/7a79cf166d1e7b...
pre-commit is a convenience for the developer to gain confidence that pre-flight checks in CI will pass. I've found trying to make them automatic just leads to pain when they interact with any non-trivial git feature and don't handle edge cases.
I've been much much happier just having a little project specific script I run when I want to do formatting/linting.
pre-commit is just a bad way to do this. 99.9% of my commits won't pass CI. I don't care. I run `git wip` which is an alias for `git commit -am "WIP"` about every 15 minutes during the day. Whenever things are in a running state. I often go back through this history on my branch to undo changes or revisit decisions, especially during refactors, especially when leveraging AI. When the most work you can lose is about 15 minutes you stop looking before you leap. Sometimes a hunch pays off and you finish a very large task in a fraction of the time you might have spent if you were ploddingly careful. Very often a hunch doesn't pay off and you have to go recover stuff from your git history, which is very easy and not hard at all once you build that muscle. The cost/benefit isn't even close, it makes me easily 2x faster when refactoring code or adding a feature to existing code, probably more. It is 'free' for greenfield work, neither helping nor really hurting. At the end the entire branch is squashed down to one commit anyway, so why would you ever not want to have free checkpoints all the time?
As I'm saying this, I'm realizing I should just wire up Emacs to call `git add {file_being_saved} && git commit -am "autocheckpoint"` every time I save a file. (I will have to figure out how to check if I'm in the middle of some operation like a merge or rebase to not mess those up.)
I'm perfectly happy to have the CI fail if I forget to run the CI locally, which is rare but does happen. In that case I lose 5 minutes or whatever because I have to go find the branch and fix the CI failure and re-push it. The flip side of that is I rarely lose hours of work, or end up painting myself in a corner because commit is too expensive and slows me down and I'm subconsciously avoiding it.
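For what it's worth, git leaves marker files behind while a merge or rebase is in progress, so the "am I in the middle of some operation?" check is a few lines of shell. A sketch (the commit message and function names are made up):

```shell
#!/bin/sh
# Sketch of an "autocheckpoint" commit that refuses to run while a merge
# or rebase is in progress, using the marker files git itself maintains.
in_progress() {
    g=$(git rev-parse --git-dir 2>/dev/null) || return 0  # not a repo: treat as busy
    [ -e "$g/MERGE_HEAD" ] || [ -e "$g/CHERRY_PICK_HEAD" ] ||
        [ -d "$g/rebase-merge" ] || [ -d "$g/rebase-apply" ]
}

autocheckpoint() {
    if in_progress; then
        echo "autocheckpoint: merge/rebase in progress, skipping" >&2
        return 1
    fi
    git add -A && git commit -qm "autocheckpoint"
}
```

An editor hook (Emacs `after-save-hook`, a VS Code task, etc.) could then call `autocheckpoint` on every save without trampling an in-flight merge or rebase.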
Not everyone in my team wires up their pre-commit hook to run the pre-commit tool. I use JJ, so I don't even have a pre-commit hook to wire up. But the tool is useful.
The key thing (that several folk have pointed out) is that CI runs the canonical checks. Using something like pre-commit (the tool) makes it easier to at least vaguely standardise making sure that you can run the same checks that CI will run. Having it run from the pre-commit hook fits nicely into many workflows, my own pre-JJ workflow included.
I've worked on several projects where running the tests locally automatically installs pre-commit hooks, and I've wanted to commit war crimes because of it.
At my last job, we ran all of our tests, linting/formatting, etc. through pre-commit hooks. It was apparently a historical relic of a time when five developers wanted to push directly to master without having to configure CI.
To bring up jujutsu, `jj fix` (https://docs.jj-vcs.dev/latest/cli-reference/#jj-fix) is a more refined way of ensuring formatting in commits. It runs a formatting command with the content on stdin and uses the results printed to stdout. It can run over merges and rebased history to ensure all your commits remain formatted (so if you enable a new formatting option, it can remove the need for a special format/style fix commit in your mutable set). Hard to go back to pre-commit hooks after using jj fix (also hard to use git after using jj ;) ).
The downside currently (although I've been assured this will be fixed one day) is that it doesn't support running static analysis over each commit you want to fix.
My git rebase workflow often involves running `git rebase -x "cargo clippy -- --deny=warnings"`. This needs a full checkout to work and not just a single file input
Yeah, to add some context for people reading this, jj fix works best for edits local to the diff, and it’s meant for edits mostly. With some trickery you could run some analysis, but it’s not what jj fix is meant for right now.
What I really want is some way within jj to keep track of which commits have been checked and which are currently failing, so I can template it into log lines.
I’ve seen similar issues once hooks start doing more than fast checks. The moment they become stateful or depend on external context, they stop being guardrails and start being a source of friction. In practice, keeping them boring and deterministic seems to matter more than catching everything early.
A workflow that works well is one that takes the better ideas from Meta's "hg"+"arcanist"+edenfs+"phabricator" diff and land strategy. Git, by itself, is too low-level for shared, mostly single-source-of-truth yet distributed dev.
Make test cases all green locally before pushing, but not in a way that interferes with pushing code and they shouldn't be tied to a particular (D)VCS. Allow uploading all of the separate proposed PRs you want in a proposed "for review" state. After a PR is signed-off and sent for merging, it goes into a linearizing single source of truth backed by an automated testing/smoke testing process before they land "auto-fast-forwarded" in a mostly uncontrolled manner that doesn't allow editing the history directly. Standardization and simplicity are good, and so is requiring peer review of code before it's accepted for existing, production, big systems.
Disallow editing trunk/main/master and whenever there's merge conflict between PRs, manual rebasing of one or the other is required. Not a huge deal.
Also, have structured OWNERS files that include people and/or distribution list(s) of people who own/support stuff. Furthermore, have a USERS file that keeps lists of people who would be affected by restarting/interrupting/changing a particular codebase/service for notification purposes too. In general, monorepo and allowing submitting code for any area by anyone are roughly good approaches.
On any project where pre-commit hooks are used, the first thing I do is disable them. What I do when the code is on my side of the line isn't your business.
I agree on the other side of the fence! I quite like precommit when I use it, but I've never imposed it on any of my projects. Some people use commits sporadically then squash down- I really don't feel comfortable breaking someone's personal workflow to that degree.
I almost always have a "this cicd must pass to merge" job, that includes linting etc, and then use squash commits exclusively when merging.
Yes, big fan of enforcing standards via CI/CD workflows. Any rules a group wishes to be met should be in there. As long as someone meets those rules by the time they open a PR, I don't care how they get there.
Sure, if the warning levels are poorly tuned I might configure my LSP to ignore everything and loosen the enforcement in the build steps until I'm ready to self review. Something I can't stand with Typescript for example is when the local development server has as strict rules as the production builds. There's no good reason to completely block doing anything useful whatsoever just because of an unused variable, unreachable code, or because a test that is never going to get committed dared to have an 'any' type.
Not if I push my branch to origin. But until I do that, it's none of your concern whether I do or don't. Once it gets thrown over the wall to my colleagues and/or the general public, that's the point where I should be conforming to repo norms. Not before then.
Can't your hook just observe a simple env var for when you're stepping off the happy path of your workflow? E.g. `GIT_HOOK_BYEBYE` set = early return in the hook script. The article seems a little dramatic. If you write a pre-commit hook that is a pain in your own arse, how does that make them fundamentally broken?
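As a sketch, the escape hatch is a couple of lines at the top of the hook. `GIT_HOOK_BYEBYE` is the commenter's example name, not a git convention, and `run_real_checks` is a placeholder:

```shell
#!/bin/sh
# Sketch: an escape-hatch env var checked at the top of a hook, so
# stepping off the happy path doesn't require deleting the hook.
hook_main() {
    if [ -n "$GIT_HOOK_BYEBYE" ]; then
        echo "hook: bypassed via GIT_HOOK_BYEBYE" >&2
        return 0
    fi
    run_real_checks   # placeholder for whatever the hook normally does
}
# An installed pre-commit hook would end with: hook_main "$@"; exit $?
```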
I run this on every commit. Sure, I have probably gone overboard, but it has prevented problems, and I may be too picky about not having a broken HEAD. But if you want to contribute, you don't have to run any pre-commit hook. It'll run on every PR too.
I don't send myself PRs, so this works for me.
Of course I always welcome suggestions and critique on how to improve my workflow.
At least nothing is stateful (well, it caches build artefacts), and aside from `cargo deny` there are no external deps.
Yep, all that and they’re also annoying. Version control tools are not supposed to argue - do what you’re told. If I messed up, the branch build will tell me.
Is that the difference between forced pre-commit hooks vs opt-in ones? I don't want to commit something that doesn't build. If nothing else, it makes future bisects annoying.
But if I intend to squash and merge, then who cares about intermediate state.
> I don't want to commit something that doesn't build.
This is a really interesting perspective. Personally I commit code that will fail the build multiple times per day. I only care that something builds at the point it gets merged to master.
The first step of which I usually make `pre-commit run --all-files` (using the third-party tool that shares a name with the git feature) - so running it locally, automatically, on changed files just gives me an early warning. It can be nice to run unit tests locally too, btw.
I’ve used pre-commit the tool and now prek for the better part of a decade and never had these issues even using rebase flows exclusively. This seems like an operator error.
Without pre-commit you should squash on merge. That way main is atomic. But squashing loses granularity. A single merge commit might both refactor a function and implement a feature. You could split it into two pull requests, but maybe the feature depends on the refactor. Now your feature branch needs to cherry pick from the refactor branch. What if the refactor branch makes further changes prior to being merged? Now your feature branch has conflicts.
With pre-commit you can disable squashing on merge. You'll have a refactor commit and a feature commit. Changes to the refactor can be --fixup followed by rebase. I find this much easier than juggling two pull requests.
I accept that this is only viable with buy-in from everyone on the team, and I wouldn't advise it for teams over 10 people. But for small teams, I love pre-commit and trunk-based development.
I don’t really like pre-commit hooks, but I do think that git-secrets is a very useful one since once a secret is in the commit history, it’s a hassle (though not impossible) to remove it. All other issues can and should be caught early as an optionally blocking step in a CI/CD pipeline build.
Running on the working tree is mostly okay - just `exit 1` if changes were made and allow the user to stage+commit new changes. It isn't perfect but it doesn't require checking out a new tree.
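A hedged sketch of that pattern — run the formatter over the working tree, then fail the commit if anything changed. `fmt_all` is a stand-in for your real formatter (gofmt, cargo fmt, prettier, ...), not an actual command:

```shell
#!/bin/sh
# Sketch: format the working tree, then abort the commit if the formatter
# touched anything, leaving the user to review and stage the fixes.
# fmt_all is an assumed placeholder for the real formatter invocation.
format_and_check() {
    fmt_all || return 1              # the formatter itself failed
    if ! git diff --quiet; then      # unstaged changes exist afterwards
        echo "hook: formatter changed files; review, stage, and retry" >&2
        return 1
    fi
    return 0
}
```

As the replies below note, this still conflates staged and unstaged changes, so it's a compromise rather than a complete fix.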
What if I've already fixed the format issue (but not staged it)? The pre-commit hook will pass, but it's not doing what the author intended (preventing unformatted code from being committed).
What if I've only staged one part of a file, and the pre-commit hook fails on the unstaged portions? That should be fine, since I'm not committing or pushing those changes.
Home Assistant takes this one step farther. There is a pre-run hook that goes out of its way to make it hard to run an "integration" that doesn't meet its quality standard. I get that they don't want to get PRs for integrations that don't check the checklist, but as someone writing an integration (which I'm currently doing, for better or for worse), I want to run my own incomplete integration, thank you very much.
(One very nice thing about AI-assisted programming: Claude is not offended by duplicating the same code over and over and using utterly awful APIs, and Claude feels no particular compulsion to get distracted thinking about how the APIs I’m targeting could be made to be less awful. Try asking the Home Assistant docs how an integration is supposed to handle a hot-added entity after an integration finishes setup: you will not get an answer. Ask Claude and it will cheerfully copy the ludicrous and obviously inappropriate solution used by other integrations. Sometimes semi-blindly doing something that works is the right solution when writing piles of glue code.)
Curious to hear what integration you are working on.
I am thinking about making this (1) into an integration also. It is an alternative way to visualize forecasts. It uses the dew point instead of temperature. I currently use a dashboard that shows the site on an iframe within Home Assistant.
They are annoying to set up and maintain, and they contain footguns. I will still use them with prek, though, because they save more dev round-trips with CI than they cost. I aim to have the hooks complete in under 1 second total. If that saves even a single CI cycle, I think it's a win time-wise.
To get a commit history that makes sense. It's not supposed to document in what order you did the work, but why and how a change was made. When I'm knee-deep in some rewrite and realize I should have changed something else first, I can just go do that change, then come back and rebase.
And in the feature branches/merge requests, I don’t merge, only rebase. Rebasing should be the default workflow. Merging adds so many problems for no good reason.
There are use cases for merging, but not as the normal workflow.
That is just not true. Merging is so much less work and the branch history clearly indicates when merging has happened.
With rebasing, there could be a million times the branch was rebased and you would have no idea when and where something got broken by hasty conflict resolution.
When conflicts happen, rebasing is equivalent to merging, just at the commit level instead of at branch level, so in the worst case, developers are met with conflict after conflict, which ends up being a confusing mental burden on less experienced devs and certainly a ”trust the process” kind of workflow for experienced ones as well.
Your real commit history is irrelevant. I don't care too much about how you came to a particular state.
The overall project history though, the clarity of changes made, and that bisecting reliably works are important to me.
Or another way: the important unit is whatever your unit of code review is. If you're not reviewing and checking individual commits, they're just noise in the history; the commit messages are not clear and I cannot reliably bisect on them (since nobody is checking that things build).
I commit anything and everything. Build fails? I still commit. If there is a stopping point and I feel like I might want to come back to this point, I commit.
I am violently against any pre-commit hook that runs on all branches. What I do on my machine on my personal branch is none of your business.
I create new branches early and often. I take upstream changes as they land on the trunk.
Anyway, this long winded tale was to explain why I rebase. My commits aren't worth anything more than stopping points.
At the end, I create a nice branch name and there is usually only one commit before code review.
I don't want to see any irrelevant history several years later, so I enforce linear history on the main branch in all projects that I work on. So far, nobody complained, and I've never seen a legitimate reason to deviate from this principle if you follow a trunk based release model.
why would you lose commit history? You are just picking up a set of commits and reapplying them. Of course you can use rebase for more things, but rebase does not equal losing commit history.
Rebase always rewrites history, losing the original commits and creating new ones. They might have the same changes and the same commit messages, but they are different commits.
Unlike some other common operations that can be easily cargo-culted, rebasing is somewhat hard to do correctly when you don't understand git, so people who don't understand git get antagonistic towards it.
If it is something like a repo for configuration management, I can understand that, because it's often a lot of very small changes, so every second commit would be a merge, and it's just easier to read that way.
Thank you. I don't need to "fix" a commit before it ends up on a remote branch. Sometimes I expect a commit to pass checks and sometimes I don't. Frankly, don't even run pre-push hooks. Just run the checks in CI when I push. You'd better be doing that anyway before I'm allowed to push to a production branch, so stop breaking my git workflows and save me the time of running duplicate checks locally.
Also, if most developers are using one editor, configure that editor to run format and auto-fix lint errors. That probably cleans up the majority of unexpected CI failures.
Pre-commit and pre-push hooks are something developers can voluntarily add (or enable) to shorten the latency until they get feedback: instead of the CI rejecting their PR, they can (optionally!) get a local message about it.
Otherwise, I agree, your project can not rely on any checks running on the dev machine with git.
Appreciate the perspective. I've worked on projects where hooks are auto-configured, and pre-commit is just never something that's going to agree with me.
I prefer to be able to push instantly and get feedback async, because by the time I've decided I'm done with a change, I've already run the tests for it. And like I said, my editor is applying formatting and lints, so those fail more rarely.
But, if your pre-push checks are fast (rather than ~minutes), I can see the utility! It sucks to get an async failure for feedback that can be delivered quickly.
In our case the same hook is re-run on the server side; the pre-commit hook is purely there to increase velocity...
... and because most people using git will have to take a second if the hook comes back with "hey, your third commit is incorrect, you forgot the ticket number".
Literally the only pre-commit hook I've ever "liked" has been one to look for files over ~1/2MB, and error with a message describing how to bypass the check (env var). It stops the mistake at the easiest-to-identify point, and helped teach a lot of people about how to set gitignore or git-attributes correctly when it is most relevant and understandable. Every single other one has been a massive pain at some point...
... but even that one took several rounds of fiddling and complexifying to get it to behave correctly (e.g. merging commits with already-committed bypassed binaries should be allowed. how do you detect that? did you know to check for that scenario when building the hook? someone's gonna get bitten by it, and there are dozens of these cases).
So yea. Agreed. Do not use pre-commit hooks, they're far more trouble than they seem, and the failure modes / surprises are awful and can be quite hard to figure out to laymen who are experiencing it.
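A naive first pass at that size check might look like the following. The limit, the env var name, and the bypass message are all illustrative, and — exactly as the comment above warns — this version does not yet handle merges that re-commit already-bypassed binaries:

```shell
#!/bin/sh
# Naive sketch of the size check described above: reject staged files over
# ~512 KB unless SKIP_SIZE_CHECK is set. Names and limit are illustrative.
# Known gaps, per the comment above: merges re-committing already-bypassed
# binaries, and (being a sketch) filenames containing whitespace.
LIMIT=${LIMIT:-524288}

check_staged_sizes() {
    [ -n "$SKIP_SIZE_CHECK" ] && return 0
    bad=0
    for f in $(git diff --cached --name-only --diff-filter=AM); do
        size=$(git cat-file -s ":$f" 2>/dev/null) || continue
        if [ "$size" -gt "$LIMIT" ]; then
            echo "error: $f is $size bytes (limit $LIMIT);" >&2
            echo "set SKIP_SIZE_CHECK=1 to bypass, or use .gitignore/LFS" >&2
            bad=1
        fi
    done
    return $bad
}
```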
A bit less enraged: pre-commit hooks should be pure functions. They must not mutate the files being committed. At best, they should generate a report. At worst, they could reject a commit (e.g. if it contains a private key file included by mistake).
In my experience pre-commit hooks are most often used to generate a starting commit message.
To put it more bluntly, pre-commit hooks are pre-commit hooks, exactly what it says on the tin. Not linting hooks or checking hooks or content filters. Depending on what exactly you want to do, they may or may not be the best tool for the job.
To put it even more bluntly, if you are trying to enforce proper formatting, pre-commit hooks are absolutely the wrong tool for the job, as hooks are trivially bypassable, and not shared when cloning a repo, by design.
> In my experience pre-commit hooks are most often used to generate a starting commit message.
The `prepare-commit-msg` hook is a better place to do that as it gives the hook some context about the commit (is the user amending an existing commit etc.)
> To put it even more bluntly, if you are trying to enforce proper formatting, pre-commit hooks are absolutely the wrong tool for the job, as hooks are trivially bypassable, and not shared when cloning a repo, by design.
They aren't a substitute for server post-receive hooks but they do help avoid having pushes rejected by the server.
This article is very much "you're holding it wrong"
> They tell me I need to have "proper formatting" and "use consistent style". How rude.
> Maybe I can write a pre-commit hook that checks that for me?
git filter is made for that. It works. There are still caveats (it will format the whole file, so you might end up committing formatting fixes to code that isn't your own).
Pre-commit is not for formatting your code. It's for checking whether the commit is correct: whether the content has a ticket ID, or whether the files pass even basic syntax validation.
> Only add checks that are fast and reliable. Checks that touch the network should never go in a hook. Checks that are slow and require an up-to-date build cache should never go in a hook. Checks that require credentials or a running local service should never go in a hook.
If you can do that, great! If you can't (say it's something like a CI/CD repo with a bunch of different languages involved, and not every dev has a setup for everything to be checked locally), having to override it a couple of times a year is still preferable to committing non-working code. We run local checks for stuff that makes sense (checking YAML correctness, or decoding encrypted YAMLs with the user's key so they also get checked), but the ones that don't go remote. It's faster: a few ms of RTT doesn't matter when you can leverage a big server CPU to run the checks faster.
Bonus points: it makes the pain point - interactive rebases - faster, because you can cache the output for a given file hash globally, so checking existing commits during a rebase takes milliseconds at most.
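That caching trick could be sketched like this: key each file's check result on its blob hash, so a rebase that revisits identical content just replays the cached verdict. `CACHE_DIR` and `run_check` are assumptions for the sketch, not details from the thread:

```shell
#!/bin/sh
# Sketch of hash-keyed memoization for a per-file check. Results are
# stored under the file's blob hash, so re-checking unchanged content
# (e.g. during an interactive rebase) is a cache lookup.
# CACHE_DIR and run_check are illustrative placeholders.
CACHE_DIR=${CACHE_DIR:-"$HOME/.cache/hook-results"}

cached_check() {
    f=$1
    hash=$(git hash-object "$f") || return 1
    mkdir -p "$CACHE_DIR"
    if [ -f "$CACHE_DIR/$hash" ]; then
        return "$(cat "$CACHE_DIR/$hash")"   # cache hit: replay the result
    fi
    run_check "$f"
    rc=$?
    echo "$rc" > "$CACHE_DIR/$hash"
    return "$rc"
}
```

One design note: keying on content rather than path means a rename or an unchanged commit replayed during a rebase is free.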
> Don't set the hook up automatically. Whatever tool you use that promises to make this reliable is wrong. There is not a way to do this reliably, and the number of times it's broken on me is more than I can count. Please just add docs for how to set it up manually, prominently featured in your CONTRIBUTING docs. (You do have contributing docs, right?)
DO set it up automatically (or as much as possible; we have a script that adds the hooks and sets the repo defaults we use). You don't want a new developer to have to spend half a day setting up some git nonsense only to get it wrong. And once you change it, just rerun the script.
Pre-push might address some of the pain points, but it doesn't address the biggest: it puts the developer in a "git hole" if they have something wrong in a commit. While pre-commit will just cancel the commit until the dev fixes it, with pre-push they now need to dig out knowledge of how to edit or undo existing commits.
the pre-commit framework does not abstract away “hooks shouldn’t be run during a rebase”, nor “hooks should be fast and reliable”, nor “hooks should never change the index”.
Not sure how you got to that conclusion, as the pre-commit framework does indeed abstract them away. Maybe you're confusing it with something else?
> hooks shouldn’t be run during a rebase
The pre-commit framework doesn't run hooks during a rebase.
> hooks should be fast and reliable
The pre-commit framework does its best to make hooks faster (by running them in parallel if possible) and more reliable (by allowing the hook author to define an independent environment the hook runs in), however it's of course still important that the hooks themselves are properly implemented. Ultimately that's something the hook author has to solve, not the framework which runs them.
> hooks should never change the index
As I read it, the author says hooks shouldn't change the working tree but the index instead, and that's what the pre-commit framework does if hooks modify files.
Personally I prefer configuring hooks so they just print a diff of what they would've changed and abort the commit, instead of letting them modify files during a commit.
> This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.
And with git, you can't even make anything that happens on the dev machines mandatory.
Anything you want to be mandatory needs to go into your CI. Pre-commit and pre-push hooks are just there to lower CI churn, not to guarantee anything.
(With the exception of people accidentally pushing secrets. The CI is too late for that, and a pre-push hook is a good idea.)
A good analogy is: git hooks are client-side validation; CI is server-side validation, aka the only validation you can trust.
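For that secrets exception, a pre-push check can scan exactly the commits about to leave the machine. A deliberately crude sketch — the regex list is illustrative and nowhere near complete; dedicated scanners like git-secrets or gitleaks are far more thorough:

```shell
#!/bin/sh
# Crude sketch of secret scanning before push: grep the outgoing commits'
# patches for obvious key material. The pattern list is illustrative;
# prefer a dedicated scanner (git-secrets, gitleaks) in practice.
PATTERN='BEGIN (RSA|EC|OPENSSH) PRIVATE KEY|AKIA[0-9A-Z]{16}'

scan_range() {
    # $1 is a commit or range about to be pushed, e.g. origin/main..HEAD
    if git log -p "$1" | grep -Eq "$PATTERN"; then
        echo "pre-push: possible secret in outgoing commits" >&2
        return 1
    fi
    return 0
}
# A real pre-push hook would read "<local-ref> <local-sha> <remote-ref>
# <remote-sha>" lines from stdin and call scan_range on each new range.
```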
You can run git commit with a --no-verify flag to skip these hooks
It looks like git absorb rewrites history. Doesn’t that break your previously pushed branch?
There's a weird thing happening on my current project. Sometimes I merge main into my branch and it fails. What fails is the pre-commit hook on the merge commit. Changes in main fail the linting checks in the pre-commit hook. But they still ended up in main, somehow. So the checks on the PR are apparently not as strict as the checks on the pre-commit hook. As a result, many developers have gotten used to committing with `--no-verify`, at which point, what is even the point of a pre-commit hook?
And sometimes I just want to commit work in progress so I can more easily backtrack my changes. These checks are better on pre-push, and definitely should be on the PR pipeline, otherwise they can and will be skipped.
Anyway, thanks for giving me some ammo to make this case.
For the sake of argument, let's say you have a check that caps the number of lines per file and that both you and main added lines in the same file. It's not too weird if that check fails only after merge, right?
One benign example of something that can break after merge even if each branch is individually passing pre-merge. In less benign cases it will your branch merged to main and actual bugs in the code.
One reason to disallow "unclean merges" and to enforce that incoming branches be rebased up to date before they're mergeable to the main branch.
You probably want to run the checks on each commit to main in CI and not rely on them being consistently run by contributors.
You do you, but I find rebasing my branch on main instead of merging makes me scratch my head way less.
Your hook really shouldn't be running on the merge commit unless you have conflicts in your merge.
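A hook can detect an in-progress merge itself and step aside. A hypothetical guard, using the `MERGE_HEAD` file git keeps around while a merge commit is being concluded:

```shell
#!/bin/sh
# pre-commit: skip checks when committing a merge, since both sides
# already passed them on their own branches.
if [ -f "$(git rev-parse --git-dir)/MERGE_HEAD" ]; then
    exit 0
fi
echo "running checks..."
# ... the usual lint/format checks go here ...
```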
Why not take the best of both worlds? Use pre-commit hooks for client-side validation, and run the same checks in CI as well. I’ve been using this setup for years without any issues.
One key requirement in my setup is that every hook is hermetic and idempotent. I don’t use Rust in production, so I can’t comment on it in depth, but for most other languages—from clang-format to swift-format—I always download precompiled binaries from trusted sources (for example, the team’s S3 storage). This ensures that the tools run in a controlled environment and consistently produce the same results.
Case in point: https://www.jj-vcs.dev/
> I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.
you will save your org a lot of pain if you do force it, same as when you do force a formatting style rather than letting anyone do what they please.
You can discuss to change it if some parts don't work but consistency lowers the failures, every time.
Enforcement should live in CI. Into people's dev environments, you put opt-in "enablement" that makes work easier in most cases, and gets out of the way otherwise.
It's a good thing you can't force it, because `git commit -n` exists. (And besides, management of the `.git/hooks` directory is done locally. You can always just wipe that directory of any noxious hooks.)
I can accept (but still often skip, with `git push --no-verify`) a time-consuming pre-push hook, but a time-consuming and flaky pre-commit hook is totally unacceptable to my workflows and I will always find a way to work around it. Like everyone else is saying, if you want to enforce some rule on the codebase then do it in CI and block merges on it.
I'm the type of developer who always has a completely different way of working. I hate pre-commit hooks, and agree that pre-push + an early step in CI is the right thing to do.
You don't have to install hooks. It's that simple.
Be prepared to have your PR blocked tho.
You shouldn't be relying on hooks to maintain the integrity of your codebase, and I'm not seeing anything here that makes me want to avoid `pre-commit` (or, more literally, the https://pre-commit.com/ tool). CI must be the source of truth for whether a commit is acceptable.
If you're using `pre-commit` the tool, not merely the hook, you can also use something like https://github.com/andrewaylett/pre-commit-action to run the tool in CI. It's a really good way to share check definitions between local development and CI, meaning you've shifted your checks to earlier in the pipeline.
I use Jujutsu day-to-day, which doesn't even support pre-commit hooks. But the tooling is still really useful, and making sure we run it in CI means that we're not relying on every developer having the hooks set up. And I have JJ aliases that help pre-commit be really useful in a JJ workflow: https://github.com/andrewaylett/dotfiles/blob/7a79cf166d1e7b...
pre-commit is a convenience for the developer to gain confidence that pre-flight checks in CI will pass. I've found trying to make them automatic just leads to pain when they interact with any non-trivial git feature and don't handle edge cases.
I've been much much happier just having a little project specific script I run when I want to do formatting/linting.
pre-commit is just a bad way to do this. 99.9% of my commits won't pass CI. I don't care. I run `git wip` which is an alias for `git commit -am "WIP"` about every 15 minutes during the day. Whenever things are in a running state. I often go back through this history on my branch to undo changes or revisit decisions, especially during refactors, especially when leveraging AI. When the most work you can lose is about 15 minutes you stop looking before you leap. Sometimes a hunch pays off and you finish a very large task in a fraction of the time you might have spent if you were ploddingly careful. Very often a hunch doesn't pay off and you have to go recover stuff from your git history, which is very easy and not hard at all once you build that muscle. The cost/benefit isn't even close, it makes me easily 2x faster when refactoring code or adding a feature to existing code, probably more. It is 'free' for greenfield work, neither helping nor really hurting. At the end the entire branch is squashed down to one commit anyway, so why would you ever not want to have free checkpoints all the time?
As I'm saying this, I'm realizing I should just wire up Emacs to call `git add {file_being_saved} && git commit -am "autocheckpoint"` every time I save a file. (I will have to figure out how to check if I'm in the middle of some operation like a merge or rebase to not mess those up.)
I'm perfectly happy to have the CI fail if I forget to run the CI locally, which is rare but does happen. In that case I lose 5 minutes or whatever because I have to go find the branch and fix the CI failure and re-push it. The flip side of that is I rarely lose hours of work, or end up painting myself in a corner because commit is too expensive and slows me down and I'm subconsciously avoiding it.
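The checkpoint alias and the "am I mid-operation?" check are both small. Sketched here with hypothetical names (the guard covers merge, rebase, and cherry-pick, per the concern above):

```shell
# The checkpoint alias: commit everything, skip hooks.
git config --global alias.wip '!git add -A && git commit -m WIP --no-verify'

# Guard for an editor-triggered autocheckpoint: bail out if a merge,
# rebase, or cherry-pick is in flight.
g=$(git rev-parse --git-dir)
if [ -e "$g/MERGE_HEAD" ] || [ -d "$g/rebase-merge" ] || \
   [ -d "$g/rebase-apply" ] || [ -e "$g/CHERRY_PICK_HEAD" ]; then
    echo "operation in progress, skipping autocheckpoint"
else
    git wip
fi
```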
Not everyone in my team wires up their pre-commit hook to run the pre-commit tool. I use JJ, so I don't even have a pre-commit hook to wire up. But the tool is useful.
The key thing (that several folk have pointed out) is that CI runs the canonical checks. Using something like pre-commit (the tool) makes it easier to at least vaguely standardise making sure that you can run the same checks that CI will run. Having it run from the pre-commit hook fits nicely into many workflows, my own pre-JJ workflow included.
I've worked in several projects where running the tests locally automatically installs pre-commit hooks, and I've wanted to commit war crimes because of it.
Don't do that, just don't.
At my last job, we ran all of our tests, linting/formatting, etc. through pre-commit hooks. It was apparently a historical relic of a time where five developers wanted to push directly to master without having to configure CI.
I too was about to become a war criminal.
To bring up jujutsu, `jj fix` (https://docs.jj-vcs.dev/latest/cli-reference/#jj-fix) is a more refined way of ensuring formatting in commits. It runs a formatting command with the file content on stdin and uses the results printed to stdout. It can simplify rewriting merged and rebased history to ensure all your commits remain formatted (so if you enable a new formatting option, it can remove the need for a special format/style fix commit in your mutable set). Hard to go back to pre-commit hooks after using jj fix (also hard to use git after using jj ;) ).
The downside currently (although I've been assured this will be fixed one day) is that it doesn't support running static analysis over each commit you want to fix.
My git rebase workflow often involves running `git rebase -x "cargo clippy -- --deny=warnings"`. This needs a full checkout to work and not just a single file input
Yeah, to add some context for people reading this, jj fix works best for edits local to the diff, and it’s meant for edits mostly. With some trickery you could run some analysis, but it’s not what jj fix is meant for right now.
The intended future solution is `jj run` (https://docs.jj-vcs.dev/latest/design/run/), which applies similar ideas to more general commands.
I keep a couple of jj aliases that apply the `pre-commit` tool to a commit or a tree of commits:
https://github.com/andrewaylett/dotfiles/blob/7a79cf166d1e7b...
What I really want is some way within jj to keep track of which commits have been checked and which are currently failing, so I can template it into log lines.
Came here to mention jj fix. It is a fundamentally more elegant way of doing things.
I’ve seen similar issues once hooks start doing more than fast checks. The moment they become stateful or depend on external context, they stop being guardrails and start being a source of friction. In practice, keeping them boring and deterministic seems to matter more than catching everything early.
A workflow that works well is one that takes the better ideas from Meta's "hg"+"arcanist"+edenfs+"phabricator" diff and land strategy. Git, by itself, is too low-level for shared, mostly single-source-of-truth yet distributed dev.
Make test cases all green locally before pushing, but not in a way that interferes with pushing code and they shouldn't be tied to a particular (D)VCS. Allow uploading all of the separate proposed PRs you want in a proposed "for review" state. After a PR is signed-off and sent for merging, it goes into a linearizing single source of truth backed by an automated testing/smoke testing process before they land "auto-fast-forwarded" in a mostly uncontrolled manner that doesn't allow editing the history directly. Standardization and simplicity are good, and so is requiring peer review of code before it's accepted for existing, production, big systems.
Disallow editing trunk/main/master and whenever there's merge conflict between PRs, manual rebasing of one or the other is required. Not a huge deal.
Also, have structured OWNERS files that include people and/or distribution list(s) of people who own/support stuff. Furthermore, have a USERS file that keeps lists of people who would be affected by restarting/interrupting/changing a particular codebase/service for notification purposes too. In general, monorepo and allowing submitting code for any area by anyone are roughly good approaches.
I think the examples given in the post are just done poorly.
Lefthook with glob+stage_fixed for formatters makes one of the issues raised a complete non-issue.
I'll write an in-depth post about it maybe within the next week or so; I've been diving into these in my hobby projects for a year or so.
My post can be found here: https://news.ycombinator.com/item?id=46409443
On any project where pre-commit hooks are used, the first thing I do is disable them. What I do when the code is on my side of the line isn't your business.
I agree on the other side of the fence! I quite like precommit when I use it, but I've never imposed it on any of my projects. Some people use commits sporadically then squash down- I really don't feel comfortable breaking someone's personal workflow to that degree.
I almost always have a "this cicd must pass to merge" job, that includes linting etc, and then use squash commits exclusively when merging.
Yes, big fan of enforcing standards via CI/CD workflows. Any rules a group wishes to be met should be in there. As long as someone meets those rules by the time they open a PR, I don't care how they get there.
Would you add type: ignore to all the files too?
My coworker did that the other day and I'm deciding how to respond.
Sure, if the warning levels are poorly tuned I might configure my LSP to ignore everything and loosen the enforcement in the build steps until I'm ready to self review. Something I can't stand with Typescript for example is when the local development server has as strict rules as the production builds. There's no good reason to completely block doing anything useful whatsoever just because of an unused variable, unreachable code, or because a test that is never going to get committed dared to have an 'any' type.
Not if I push my branch to origin. But until I do that, it's none of your concern if I do or don't. Once it gets thrown over the wall to my colleagues and/or the general public, that's the point where I should be conforming to repo norms. Not before then.
Your hook can't observe a simple env var, if you are stepping off the happy path of your workflow? E.g. `GIT_HOOK_BYEBYE` = early return in hook script. Article seems a little dramatic. If you write a pre-commit hook that is a pain in your own arse, how does that make them fundamentally broken?
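i.e., something like this at the top of the hook (variable name as in the comment):

```shell
#!/bin/sh
# Escape hatch: GIT_HOOK_BYEBYE=1 git commit ... skips everything below.
if [ -n "$GIT_HOOK_BYEBYE" ]; then
    exit 0
fi
echo "running the usual checks..."
# ... lint/format/etc. ...
```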
I feel like I found better git commands for this, that don't have these problems. It's not perfect, sure, but works for me.
The pre commit script (https://github.com/ThomasHabets/rustradio/blob/main/extra/pr...) triggers my executor which sets up the pre commit environment like so: https://github.com/ThomasHabets/rustradio/blob/main/tickbox/...
I run this on every commit. Sure, I have probably gone overboard, but it has prevented problems, and I may be too picky about not having a broken HEAD. But if you want to contribute, you don't have to run any pre commit. It'll run on every PR too.
I don't send myself PRs, so this works for me.
Of course I always welcome suggestions and critique on how to improve my workflow.
And least nothing is stateful (well, it caches build artefacts), and aside from "cargo deny" no external deps.
Only a minor suggestion: git worktrees are a semi-recent addition that may be nicer than your git archive setup
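For reference, the worktree flow looks roughly like this (`make check` standing in for whatever the actual checks are, and the directory name is arbitrary):

```shell
# Check out HEAD into a second, linked working tree:
git worktree add ../project-check HEAD
(cd ../project-check && make check)   # hypothetical check command
git worktree remove ../project-check
```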
Yep, all that and they’re also annoying. Version control tools are not supposed to argue - do what you’re told. If I messed up, the branch build will tell me.
Is that the difference between forced pre commits vs opt in? I don't want to commit something that doesn't build. If nothing else it makes future bisects annoying.
But if I intend to squash and merge, then who cares about intermediate state.
> I don't want to commit something that doesn't build.
This is a really interesting perspective. Personally I commit code that will fail the build multiple times per day. I only care that something builds at the point it gets merged to master.
The first step of which I usually have as `pre-commit run --all-files` (using the third-party tool with the same name as the git feature), so running it locally and automatically on changed files just gives me an early warning. It can be nice to run unit tests locally too, btw.
I’ve used pre-commit the tool and now prek for the better part of a decade and never had these issues even using rebase flows exclusively. This seems like an operator error.
Without pre-commit you should squash on merge. That way main is atomic. But squashing loses granularity. A single merge commit might both refactor a function and implement a feature. You could split it into two pull requests, but maybe the feature depends on the refactor. Now your feature branch needs to cherry pick from the refactor branch. What if the refactor branch makes further changes prior to being merged? Now your feature branch has conflicts.
With pre-commit you can disable squashing on merge. You'll have a refactor commit and a feature commit. Changes to the refactor can be --fixup followed by rebase. I find this much easier than juggling two pull requests.
I accept that this is only viable with buy-in from everyone on the team, and I wouldn't advise it for teams over 10 people. But for small teams, I love pre-commit and trunk-based development.
Good to see this having spent the last 10 years arguing with people who configured pre commit hooks then failed to understand the bad consequences.
I don’t really like pre-commit hooks, but I do think that git-secrets is a very useful one since once a secret is in the commit history, it’s a hassle (though not impossible) to remove it. All other issues can and should be caught early as an optionally blocking step in a CI/CD pipeline build.
Running on the working tree is mostly okay - just `exit 1` if changes were made and allow the user to stage+commit new changes. It isn't perfect but it doesn't require checking out a new tree.
What if I've already fixed the format issue (but not staged it)? The pre-commit hook will pass, but it's not doing what the author intended (preventing unformatted code from being committed).
What if I've only staged one part of a file, but the pre-commit hook fails on the unstaged portions, which should be fine since I'm not committing or pushing those changes?
You can stash it or `git commit -n`. Perfect is the enemy of good enough.
this completely breaks `git add -p`.
You can either do another `git add -p` after to stage the fixed formatting or do `git add -pn`
Home Assistant takes this one step farther. There is a pre-run hook that goes out of its way to make it hard to run an “integration” that doesn’t meet its quality standard. I get that they don’t want to get PRs for integrations that don’t check the checklist, but as someone writing an integration (which I’m currently doing, for better or for worse), I want to run my own incomplete integration, thank you very much.
(One very nice thing about AI-assisted programming: Claude is not offended by duplicating the same code over and over and using utterly awful APIs, and Claude feels no particular compulsion to get distracted thinking about how the APIs I’m targeting could be made to be less awful. Try asking the Home Assistant docs how an integration is supposed to handle a hot-added entity after an integration finishes setup: you will not get an answer. Ask Claude and it will cheerfully copy the ludicrous and obviously inappropriate solution used by other integrations. Sometimes semi-blindly doing something that works is the right solution when writing piles of glue code.)
Curious to hear what integration you are working on.
I am thinking about making this (1) into an integration also. It is an alternative way to visualize forecasts. It uses the dew point instead of temperature. I currently use a dashboard that shows the site on an iframe within Home Assistant.
1. https://observablehq.com/@drio/shader-galaxy
Lutron QS Standalone, to control Lutron devices via the QSE-CI-NWK-E.
It's astounding how many people do not understand that you don't have to install pre-commit hooks, or that you can disable them locally when working on a temp branch.
They are annoying to setup and maintain and contain footguns. I will still use them with prek though because they save dev cycles back-and-forth with CI more than they hurt. I aim to have the hooks complete in under 1 second total. If it saves even a single CI cycle, I think that's a win time wise.
why do people rebase so often? shouldn't it be excluded from the usual workflows as you are losing commit history as well?
To get a commit history that makes sense. It’s not supposed to document in what order you did the work, but why and how a change was made. when I’m knee deep in some rewrite and realize I should have changed something else first, I can just go do that change, then come back and rebase.
And in the feature branches/merge requests, I don’t merge, only rebase. Rebasing should be the default workflow. Merging adds so many problems for no good reason.
There are use cases for merging, but not as the normal workflow.
That is just not true. Merging is so much less work and the branch history clearly indicates when merging has happened.
With rebasing, there could be a million times the branch was rebased and you would have no idea when and where something got broken by hasty conflict resolution.
When conflicts happen, rebasing is equivalent to merging, just at the commit level instead of at branch level, so in the worst case, developers are met with conflict after conflict, which ends up being a confusing mental burden on less experienced devs and certainly a ”trust the process” kind of workflow for experienced ones as well.
Your real commit history is irrelevant. I don't care too much about how you came to a particular state.
The overall project history though, the clarity of changes made, and that bisecting reliably works are important to me.
Or another way; the important unit is whatever your unit of code review is. If you're not reviewing and checking individual commits, they're just noise in the history; the commit messages are not clear and I cannot reliably bisect on them (since nobody is checking that things build).
I write really poopy commit messages. Think "WIP" type nonsense. I branch off of the trunk, even my branch name is poopy like
feature/{first initial} {last initial} DONOTMERGE {yyyy-MM-dd-hh-mm-ss}
Yes, the branch name literally says do not merge.
I commit anything and everything. Build fails? I still commit. If there is a stopping point and I feel like I might want to come back to this point, I commit.
I am violently against any pre commit hook that runs on all branches. What I do on my machine on my personal branch is none of your business.
I create new branches early and often. I take upstream changes as they land on the trunk.
Anyway, this long winded tale was to explain why I rebase. My commits aren't worth anything more than stopping points.
At the end, I create a nice branch name and there is usually only one commit before code review.
Isn't your tale more about squashing than rebasing?
I don't want to see any irrelevant history several years later, so I enforce linear history on the main branch in all projects that I work on. So far, nobody complained, and I've never seen a legitimate reason to deviate from this principle if you follow a trunk based release model.
why would you lose commit history? You are just picking up a set of commits and reapplying them. Of course you can use rebase for more things, but rebase does not equal losing commit history.
Rebase always rewrites history, losing the original commits and creating new ones. They might have the same changes and the same commit messages, but they are different commits.
I think that only the most absolutely puritan git workflows wouldn’t allow a local rebase.
The sum of the re-written changes still amount to the same after a rebase. When would you need access to the pre-rebase history, and to what end?
Well, sometimes you do if you made a mistake, but that's already handled by the reflog.
Because gerrit.
But even if i wasn't using gerrit, sometimes its the easiest way to fix branches that are broken or restructure your work in a more clear way
Really; I keep reading about all the problems people have “every time I rebase” and I wonder what tomfoolery they’re really up to
Unlike some other common operations that can be easily cargo-culted, rebasing is somewhat hard to do correctly when you don't understand git, so people who don't understand git get antagonistic towards it.
If it is something like repo for configuration management I can understand that because its often a lot of very small changes and so every second commit would be a merge, and it's just easier to read that way.
... for code, honestly no idea
If only there was a way to ignore merges from git log, or just show the merges…
(Hint: --no-merges, --merges)
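Concretely (adding `--first-parent`, which gives a mainline-only view with one entry per merge):

```shell
git log --oneline --no-merges      # hide merge commits
git log --oneline --merges         # show only merge commits
git log --oneline --first-parent   # mainline view: one entry per merge
```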
Thank you. I don't need to "fix" a commit before it ends up on a remote branch. Sometimes I expect a commit to pass checks and sometimes I don't. Frankly, don't even run pre-push hooks. Just run the checks in CI when I push. You'd better be doing that anyway before I'm allowed to push to a production branch, so stop breaking my git workflows and save me the time of running duplicate checks locally.
Also, if most developers are using one editor, configure that editor to run format and auto-fix lint errors. That probably cleans up the majority of unexpected CI failures.
Pre-commit and pre-push hooks are something developers can voluntarily add (or enable) to shorten the latency until they get feedback: instead of the CI rejecting their PR, they can (optionally!) get a local message about it.
Otherwise, I agree, your project can not rely on any checks running on the dev machine with git.
Appreciate the perspective. I've worked on projects where hooks are auto-configured, and pre-commit is just never something that's going to agree with me.
I prefer to be able to push instantly and get feedback async, because by the time I've decided I'm done with a change, I've already run the tests for it. And like I said, my editor is applying formatting and lints, so those fail more rarely.
But, if your pre-push checks are fast (rather than ~minutes), I can see the utility! It sucks to get an async failure for feedback that can be delivered quickly.
In our case same hook is re-ran on server side; the pre-commit hook is purely to increase velocity
... and because most people using git will have to take a second if the hook returns to them "hey, your third commit is incorrect, you forgot the ticket number"
I don't want roundtrips to my CI which easily takes a minute and pushes me to look at yet another window. Pre-commit hooks save me so much time.
Literally the only pre-commit hook I've ever "liked" has been one to look for files over ~1/2MB, and error with a message describing how to bypass the check (env var). It stops the mistake at the easiest-to-identify point, and helped teach a lot of people about how to set gitignore or git-attributes correctly when it is most relevant and understandable. Every single other one has been a massive pain at some point...
... but even that one took several rounds of fiddling and complexifying to get it to behave correctly (e.g. merging commits with already-committed bypassed binaries should be allowed. how do you detect that? did you know to check for that scenario when building the hook? someone's gonna get bitten by it, and there are dozens of these cases).
So yea. Agreed. Do not use pre-commit hooks, they're far more trouble than they seem, and the failure modes / surprises are awful and can be quite hard to figure out to laymen who are experiencing it.
(much of this is true for all git hooks imo)
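A sketch of that size-cap hook. The limit, the env-var name, and the "newly added files only" choice are this example's, not the original's; restricting to `--diff-filter=A` is one way to side-step the merge-with-already-bypassed-binaries case above, at the cost of not catching files that grow past the limit later:

```shell
#!/bin/sh
# Reject newly added files over ~512 KB, with an env-var bypass.
[ -n "$ALLOW_BIG_FILES" ] && exit 0
limit=$((512 * 1024))
status=0
# Only files added in this commit, so merges carrying old, already
# bypassed binaries are left alone. (No filenames with spaces, please.)
for f in $(git diff --cached --name-only --diff-filter=A); do
    size=$(git cat-file -s ":$f")
    if [ "$size" -gt "$limit" ]; then
        echo "error: $f is $size bytes (limit $limit)." >&2
        echo "       set ALLOW_BIG_FILES=1 to bypass." >&2
        status=1
    fi
done
exit $status
```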
Not fundamentally broken, just broken in certain use cases where I'd have to do something like
eg. (using the typical aliases)
A bit less enraged: pre-commit hooks should be pure functions. They must not mutate the files being committed. At best, they should generate a report. At worst, they could reject a commit (e.g. if it contains a private key file included by mistake).
> e.g. if it contains a private key file included by mistake
Thanks - this is the first example of a pre-commit hook that I can see value in.
Remember that such a key will already have been copied into the repository on `git add`, and will stay there until garbage collected.
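A reject-only version of that check: it reads the staged diff and never touches the tree. The pattern matching is deliberately naive; real tools like git-secrets go much further:

```shell
#!/bin/sh
# pre-commit: abort if any staged line looks like a PEM private key.
if git diff --cached -U0 | grep -qE '^\+.*BEGIN( [A-Z]+)? PRIVATE KEY'; then
    echo "error: staged content looks like a private key; aborting commit" >&2
    exit 1
fi
exit 0
```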
In my experience pre-commit hooks are most often used to generate a starting commit message.
To put it more bluntly, pre-commit hooks are pre-commit hooks, exactly what it says on the tin. Not linting hooks or checking hooks or content filters. Depending on what exactly you want to do, they may or may not be the best tool for the job.
To put it even more bluntly, if you are trying to enforce proper formatting, pre-commit hooks are absolutely the wrong tool for the job, as hooks are trivially bypassable, and not shared when cloning a repo, by design.
> In my experience pre-commit hooks are most often used to generate a starting commit message.
The `prepare-commit-msg` hook is a better place to do that as it gives the hook some context about the commit (is the user amending an existing commit etc.)
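A sketch of such a hook. The ticket-from-branch-name convention here is hypothetical; the interesting part is the second argument, which tells you whether this is a merge, squash, amend, or plain commit:

```shell
#!/bin/sh
# .git/hooks/prepare-commit-msg <msg-file> [<source>] [<sha>]
MSG_FILE=$1
SOURCE=$2   # empty for plain `git commit`; "merge", "squash", "commit" (amend), ...
# Only prefill for plain new commits, not merges or amends:
if [ -z "$SOURCE" ]; then
    branch=$(git symbolic-ref --short HEAD 2>/dev/null)
    ticket=$(printf '%s' "$branch" | grep -oE '[A-Z]+-[0-9]+' | head -n 1)
    if [ -n "$ticket" ]; then
        printf '%s: ' "$ticket" | cat - "$MSG_FILE" > "$MSG_FILE.tmp" &&
            mv "$MSG_FILE.tmp" "$MSG_FILE"
    fi
fi
```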
> To put it even more bluntly, if you are trying to enforce proper formatting, pre-commit hooks are absolutely the wrong tool for the job, as hooks are trivially bypassable, and not shared when cloning a repo, by design.
They aren't a substitute for server post-receive hooks but they do help avoid having pushes rejected by the server.
This article is very much "you're holding it wrong"
> They tell me I need to have "proper formatting" and "use consistent style". How rude.
> Maybe I can write a pre-commit hook that checks that for me?
git filter is made for that. It works. There are still caveats (it will format the whole file, so you might end up committing formatting fixes to code that isn't your own).
Pre-commit is not for formatting your code. It's for checking whether the commit is correct: whether the message has a ticket ID, or whether the files pass even basic syntax validation.
> Only add checks that are fast and reliable. Checks that touch the network should never go in a hook. Checks that are slow and require an update-to-date build cache should never go in a hook. Checks that require credentials or a running local service should never go in a hook.
If you can do that, great! If you can't (say it's something like a CI/CD repo with a bunch of different languages involved, and not every dev has a setup to check everything locally), having to override it twice a year is still preferable to committing non-working code. We run local checks for stuff that makes sense (checking YAML correctness, or decoding encrypted YAMLs with the user's key so they also get checked), but the ones that don't go remote. It's faster: a few ms of RTT doesn't matter when you can leverage a big server CPU to run the checks faster.
Bonus points: it makes the pain point - interactive rebases - faster, because you can cache the output for a given file hash globally, so existing commits take milliseconds at most to check during a rebase.
> Don't set the hook up automatically. Whatever tool you use that promises to make this reliable is wrong. There is not a way to do this reliably, and the number of times it's broken on me is more than I can count. Please just add docs for how to set it up manually, prominantly featured in your CONTRIBUTING docs. (You do have contributing docs, right?)
DO set it up automatically (or as much as possible; we have a script that adds the hooks and sets the repo defaults we use). You don't want a new developer to have to spend half a day setting up some git nonsense only to get it wrong. And once you change it, just rerun it.
Pre-push might address some of the pain points, but it doesn't address the biggest: it puts the developer in a "git hole" if they have something wrong in a commit, because while pre-commit will just... cancel the commit until the dev fixes it, with pre-push they now need to dig out knowledge on how to edit or undo existing commits.
> they now need to dig out knowledge on how to edit or undo existing commits
This knowledge is a crucial part of effective use of git every day, so if some junior dev has to learn it quick it's doing them a favor.
The pre-commit framework [1] abstracts all these issues away and offers a bunch of other advantages as well.
[1]: https://pre-commit.com/
the pre-commit framework does not abstract away “hooks shouldn’t be run during a rebase”, nor “hooks should be fast and reliable”, nor “hooks should never change the index”.
Not sure how you got to that conclusion, as the pre-commit framework does indeed abstract them away. Maybe you're confusing it with something else?
> hooks shouldn’t be run during a rebase
The pre-commit framework doesn't run hooks during a rebase.
> hooks should be fast and reliable
The pre-commit framework does its best to make hooks faster (by running them in parallel if possible) and more reliable (by allowing the hook author to define an independent environment the hook runs in), however it's of course still important that the hooks themselves are properly implemented. Ultimately that's something the hook author has to solve, not the framework which runs them.
> hooks should never change the index
As I read it, the author says hooks shouldn't change the working tree but the index instead, and that's what the pre-commit framework does if hooks modify files.
Personally I prefer configuring hooks so they just print a diff of what they would've changed and abort the commit, instead of letting them modify files during a commit.