Comment by dshacker
3 days ago
> For context, if we tried this with “vanilla Git”, before we started our work, many of the commands would take 30 minutes up to hours and a few would never complete. The fact that most of them are less than 20 seconds is a huge step but it still sucks if you have to wait 10-15 seconds for everything. When we first rolled it out, the results were much better. That’s been one of our key learnings.
>
> If you read my post that introduced GVFS, you’ll see I talked about how we did work in Git and GVFS to change many operations from being proportional to the number of files in the repo to instead be proportional to the number of files “read”. It turns out that, over time, engineers crawl across the code base and touch more and more stuff leading to a problem we call “over hydration”.
>
> Basically, you end up with a bunch of files that were touched at some point but aren’t really used any longer and certainly never modified. This leads to a gradual degradation in performance. Individuals can “clean up” their enlistment but that’s a hassle and people don’t, so the system gets slower and slower.
Great quote from him here.
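
The “proportional to the number of files read” behavior is easy to model. Here is a minimal toy sketch of the over-hydration dynamic he describes: the hydrated-file set only ever grows as engineers crawl the code base, and a status-style operation slows down with it. All file counts and per-file costs below are made-up assumptions for illustration, not GVFS measurements.

```python
import random

# Toy model of GVFS "over-hydration". All numbers are made up for
# illustration; they are not measurements from the actual system.
TOTAL_FILES = 3_000_000        # order of magnitude of a huge monorepo (assumption)
COST_PER_HYDRATED_FILE = 5e-6  # hypothetical seconds of work per hydrated file

def status_time(hydrated: set) -> float:
    """In this model, status-style operations scale with the number of
    files ever read (hydrated), not with the total size of the repo."""
    return COST_PER_HYDRATED_FILE * len(hydrated)

hydrated = set()
for week in range(1, 53):
    # Engineers "crawl across the code base": files they read get hydrated
    # and then stay hydrated, even if never opened or modified again.
    hydrated.update(random.sample(range(TOTAL_FILES), k=20_000))
    if week % 13 == 0:
        print(f"week {week:2d}: {len(hydrated):>9,} hydrated files, "
              f"status ~{status_time(hydrated):.1f}s")

# "Cleaning up" the enlistment re-virtualizes unused files, shrinking the
# hydrated set and restoring fast operations. Nobody does it unprompted.
hydrated.clear()
print(f"after cleanup: status ~{status_time(hydrated):.1f}s")
```

The cleanup he mentions amounts to re-virtualizing files so the hydrated set shrinks back down (GVFS exposes this as a dehydrate operation), but as he notes, it only helps if people actually bother to run it.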