git works just fine with large files. The problem is that when you clone a repo, or pull, by default it gets everything, including large files deep in the history that you probably don't care about anymore.
That was actually an initial selling point of git: you have the full history locally. You can work from the plane/train/deserted island just fine.
These large files will persist in the repo forever. So people look for options to segregate large files out so that they only get downloaded on demand (aka "lazily").
All the existing options (submodules, LFS, partial clones) are different answers to "how do we make certain files only download on demand?"
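For the partial-clone flavor, the on-demand behavior looks roughly like this (the URL is a placeholder, and the server has to have partial clone enabled):

    # Skip all blobs at clone time; git fetches them lazily when a checkout needs them
    git clone --filter=blob:none https://example.com/big-repo.git

    # Or only defer blobs above a size threshold, e.g. 1 MB
    git clone --filter=blob:limit=1m https://example.com/big-repo.git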
IIRC, it takes ages to index a large folder. I was trying to use it to store diffs of my backup folder, which constantly gets rclone'd and rsync'd over, in case those ever got fucked up catastrophically.
No, git does not work 'just fine' with large files. It works like ass.
> All the existing options
Don't forget sparse checkouts!
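Sparse checkout combined with a blob filter gets you roughly the same lazy behavior; a rough sketch (branch name and paths are made up):

    git clone --filter=blob:none --no-checkout https://example.com/big-repo.git
    cd big-repo
    # Only materialize the directories you care about; blobs outside them are never downloaded
    git sparse-checkout set docs/ src/app/
    git checkout main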