
Comment by IshKebab

2 days ago

LFS is bad. The server implementations suck. It conflates object contents with the storage method. It's opt-in, in a terrible way: if you do the obvious thing and just clone, you get tiny text files instead of the files you actually want.
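For anyone who hasn't hit it: the "tiny text file" is the LFS pointer that gets checked out when the smudge filter isn't configured. It looks roughly like this (the oid and size here are placeholders):

    version https://git-lfs.github.com/spec/v1
    oid sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    size 104857600

and the non-obvious, after-the-fact fix is:

    git lfs install
    git lfs pull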

I dunno if their solution is any better, but it's fairly unarguable that LFS is bad.

It does seem like this proposal has exactly the same issue. Unless this new method blocks cloning when it can't reach the promisors, you'll end up with similar problems of broken large files.

  • How so? This proposal doesn’t require you to run `git lfs install` to get the correct files…

    • And what happens when an object is missing from the cloud storage, or that storage has been migrated multiple times and someone turns down the old storage that's needed for archival versions? (Current promisor behaviour is sketched below.)

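For reference, the partial-clone/promisor machinery this proposal builds on behaves roughly like this today (repo URL and tag are hypothetical):

    git clone --filter=blob:limit=1m https://example.com/big-repo.git
    cd big-repo
    git config remote.origin.promisor    # prints "true"
    git checkout v1.0                    # oversized blobs fetched on demand

Unlike LFS, a plain clone here can't leave you with pointer files: if the promisor can't supply an object, the command fails outright. The unanswered question upthread is what happens once the storage behind the promisor gets migrated or turned down.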

I think the answer is maybe not storing large files in the repo but managing them separately.

Mostly I haven't run into such a use case, but in general I don't see any upside to shoving big files in together with the code inside a repository.

  • That is a complete no-go for many use cases. Large files can have exactly the same needs as your code: you need to branch them, you need to know when and why they changed, you need to check how an old build with an old version of the large file worked, etc. Just because code tends to be small doesn't mean that all source files for a real program are going to be small too.

    • Yeah, but Git is not the tool for that.

      That is why I don't understand why people "need to use Git".

      You can still build something else that keeps versions and tracks them, in many different ways.

      You can store a reference in the repo, like a link or whatever (rough sketch after this thread).

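A rough sketch of that "reference in the repo" idea, with hypothetical file names and URL: commit a manifest of checksums and URLs, say an assets.txt with one "<sha256>  <url>" per line (the hash below is a placeholder):

    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  https://cdn.example.com/blobs/model-v2.bin

and fetch/verify with a few lines of shell:

    # download each asset and check it against the committed checksum
    while read -r sha url; do
      f=$(basename "$url")
      curl -fsSLo "$f" "$url"
      echo "$sha  $f" | sha256sum -c -
    done < assets.txt

Which is, of course, more or less what an LFS pointer file already is, minus the tooling.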