
Comment by falcolas

7 years ago

Any time you normalize a workflow around unsafe commands, you risk becoming desensitized to the very warning messages you're relying on to save you.

If you have one rebase with a force push a week, people will quickly stop asking "who did the force push?", since it has become normal. They'll just let the force push come through to their local repo, only to find out that it rolled the repo back a year. Now productivity is halted dead until someone goes in, fixes it, and makes yet another force push.

The worst part to me is that it makes the repo lossy. The entire purpose of having version control is actively subverted in the name of a "clean" history.

I think this is fine, but then I'm comfortable with what a force-push is (a remote reset); I know that the commits still exist even after the ref is moved, and how to inspect the history of a bare repo to move the ref back.
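A minimal sketch of that claim (file names and messages are made up): a force-push only moves a ref, and until garbage collection prunes them the "lost" commits are still in the object database, so the ref can simply be moved back. Simulated locally here with `git reset --hard`, the local analogue of a force-push rollback:

```shell
# Illustrative only: roll a branch back, then recover it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com   # throwaway identity for the demo
git config user.name dev
echo one > file; git add file; git commit -qm "first"
echo two > file; git commit -qam "second"
lost=$(git rev-parse HEAD)

git reset -q --hard HEAD~1     # local analogue of a force-push rollback
git cat-file -e "$lost"        # the "lost" commit object is still there
git reset -q --hard "$lost"    # move the ref back: nothing was destroyed
git log -1 --format=%s         # prints: second
```

On a server-side bare repo, reflogs are usually disabled, but the objects are just as durable: with the old SHA in hand, `git update-ref refs/heads/<branch> <sha>` restores the branch.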

I don't understand the business requirement for a non-lossy repo. In my experience, we need to be able to link a release to a specific commit, and we need to show that the changes were reviewed & tested before they were deployed. We use tags for this.
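As a sketch of that tag-based approach (the release name and commit message here are hypothetical), an annotated tag pins a release name to an exact commit, and the link is recoverable later:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
echo code > app; git add app; git commit -qm "reviewed and tested change"

# Hypothetical release name; -a creates a full tag object with a tagger and message
git tag -a v1.0 -m "release 1.0"
git rev-parse "v1.0^{commit}"   # the exact commit this release shipped
git tag -n1 v1.0                # the release and its message, for auditors
```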

I also think the entire purpose of source control is to be able to answer questions like "who changed this? when was it changed? why?".
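Those three questions map directly onto git's query commands; a sketch (the file, author, and message are made up for the demo):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email alice@example.com
git config user.name Alice
echo 'timeout = 30' > config.ini
git add config.ini
git commit -qm "Raise timeout to 30s for slow CI runners"

git blame -- config.ini                        # who last changed each line, and when
git log --format='%an %ad %s' -- config.ini    # who / when / why, per commit
```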

> Now productivity is halted dead

Because of a bad push to a single repo? I really like the distributed nature of git: it lets me stay productive even when other repos and branches elsewhere are having issues. I would avoid treating git as "SVN but newer".