Comment by fulafel
5 hours ago
Here's a web source from 5 years ago about how much CPU time it took: https://github.blog/open-source/git/git-clone-a-data-driven-...
Using the slowest clone method they measured 8s for a 750 MB repo and 0.45s for a 40 MB repo. That appears roughly linear, so about 1.1s for a 100 MB repo should be a valid interpolation.
So doing 30 of those per second takes only about 33 cores. Servers have hundreds of cores now (e.g. 384 cores: https://www.phoronix.com/review/amd-epyc-9965-linux-619).
And remember we're using worst-case assumptions in places (the slowest clone method, and numbers from old hardware). In practice I'd bet a fastish laptop would suffice.
Edit: actually, on closer look at the GitHub-reported numbers the interpolation isn't straightforward: on the bigger 750 MB repo the partial clone is actually said to be slower than the base full clone. However, this doesn't change the big picture that it'll easily fit on one server.
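For concreteness, here's a minimal back-of-envelope sketch of that interpolation and core count, just plugging in the two data points quoted above and assuming linear scaling (which, per the edit, is only a rough fit):

    # rough estimate, not from the GitHub post itself
    def clone_seconds(size_mb):
        # measured points: 0.45 s at 40 MB, 8 s at 750 MB (slowest clone method)
        slope = (8.0 - 0.45) / (750 - 40)   # ~0.0106 s per MB
        return 0.45 + slope * (size_mb - 40)

    t = clone_seconds(100)   # ~1.1 s per 100 MB clone
    cores = 30 * t           # 30 clones/s -> ~33 cores kept busy
    print(f"{t:.2f} s per clone, ~{cores:.0f} cores for 30 clones/s")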
One, expensive, server.
... or a cheaper one, since we'd be using only tens of cores in the above scenario. Or you could use a slice of an existing server via virtualization.