High Performance Git

8 hours ago (gitperf.com)

I never faced git performance issues when working with code. Guess my repos weren't big. But when I tried to use git as a versioned database of changes in my pet project, I learned a lot about indexes, compacting, etc. The article covers a lot and is very helpful!

Git is the industry standard because, for what it gives you, it's a remarkably robust and simple program to use. We're all vaguely aware that the internals are complex, but the UX is clean and usable enough that the complexity usually doesn't leak out.

But the day this breaks down and I have to deal with bloom filters, packfiles, maintaining the git garbage collector or rerere cleanup, is the day I switch our codebase to a centralized VCS.

This stuff is cool to learn about; but it's 5 layers removed from anything I want to be thinking about in my day to day work.

  • I'm pretty sure git is the industry standard almost entirely because GitHub exists. I very much disagree that the UX is clean. The CLI is more than a bit of a mess.

    • > I'm pretty sure git is the industry standard almost entirely because GitHub exists.

      Nah, I remember that time vividly. GitHub became a thing about a year or two after git was already very much taking the lead.

      GitHub became GitHub because git was the winner. There were alternative hubs that supported bazaar and mercurial and whatnot, but git won because for most people, Linus and the kernel team being behind it was reason enough to trust it.

      (and I say this as someone who liked hg more than git)

I'm only on to chapter two and already it's explained some plumbing details that I somehow have missed all these years. This is great.

I've always wanted to see a book that describes git for the common man and gives them tons of examples for how to use it to do productive things.

Even for a small office, git can be immensely useful. Entire production line workflows can be implemented with git .. if only folks would learn to use it productively.

It's not just for development. Writers can use it productively. Accountants too.

It always kind of irks me that Git hasn't just been folded into the OS front-end UI by any of the OS vendors .. it'd be so revolutionary to give common folks an easy way to manage the timeline/history of their computer use with git.

  • The obvious reason is that most file formats used by writers, accountants, etc. are binary files, which don't benefit much from git.

> LFS adds its own operational overhead.

Seemingly seconds on every remote-touching command, even on a very small repo.

  • What is worse is that for about half a year or so, I now have to authenticate my ed25519-sk key with my Yubikey thrice (!) when using LFS. On every push.

I've been wanting to ask this:

Why isn't

    git clone --depth 1 ... 

the default?

I would guess that for at least 90% of the repos I clone, I just want to install something. Even for the rest, I might hack on the code but seldom look into the history. If I do then I could do a `git fetch` at that point and save the bandwidth and disk space the rest of the time.
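The clone-shallow-then-deepen workflow described above can be sketched like this (a minimal demo against a throwaway local repo, so nothing here depends on a real remote; note that `--depth` requires the `file://` transport, since plain local-path clones ignore it):

```shell
set -e
tmp=$(mktemp -d)

# Build a tiny "origin" repo with three commits to clone from.
git init -q "$tmp/origin"
cd "$tmp/origin"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m one
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m two
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m three

# Shallow clone: only the tip commit comes down.
git clone -q --depth 1 "file://$tmp/origin" "$tmp/shallow"
cd "$tmp/shallow"
git rev-list --count HEAD    # 1 — history is truncated

# Later, when you actually need the history, fetch it on demand:
git fetch -q --unshallow     # or: git fetch --deepen <n> for partial history
git rev-list --count HEAD    # 3 — full history now present
```

`git fetch --unshallow` converts the repo into a normal full clone, while `--deepen` lets you pull history incrementally, which is the "save the bandwidth until I need it" behaviour the comment is asking about.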