Comment by com2kid
3 days ago
At one point Source Depot was incredibly advanced, and there are still features that it had that git doesn't. Directory mapping was a standout feature! Being able to pull down only certain directories from a depot, remap where they live locally, and even have the same file appear in multiple places makes sharing dependencies across multiple projects really easy, and a lot of complicated tooling around "monorepos" wouldn't need to exist if git supported directory mapping.
(You can get 80% of the way there with symlinks, but in my experience they eventually break in git when too many different platforms are making commits.)
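To make the feature concrete, here is a rough sketch of a Perforce-style client view (Source Depot used essentially the same model); the depot paths and workspace name are made up:

    Client: my-workspace
    Root:   /home/me/src
    View:
        //depot/projectA/...        //my-workspace/projectA/...
        -//depot/projectA/docs/...  //my-workspace/projectA/docs/...
        //depot/shared/libfoo/...   //my-workspace/projectA/libs/libfoo/...

Only the mapped directories sync into the workspace, and the shared library lands wherever the local build expects it, without symlinks or copies.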
Also at one point I maintained an obscenely advanced test tool at MS. It pounded through millions of test cases across a slew of CPU architectures, intermingling emulators and physical machines that were connected to dev boxes hosting test code over a network-controlled USB switch. (See: https://meanderingthoughts.hashnode.dev/how-microsoft-tested... for more details!)
Microsoft had some of the first code coverage tools for C/C++, spun out of a project from Microsoft Research.
Their debuggers are still some of the best in the world. NodeJS debugging in 2025 is dog shit compared to C# debugging in 2005.
I never understood the value of directory mapping when we used Perforce. It only seemed to add complexity when one team checked out code in different hierarchies and then some builds worked, some didn’t. Git was wonderful for having a simple layout.
You might feel differently if you worked on just a few directories in a giant repo. Sparse client views were a great feature of SD.
I'm in exactly this situation with Perforce today, and I still hate it. The same problem OP described applies - you need to know which exact directories to check out to build, run tests etc successfully. You end up with wikis filled with obscure lists of mappings, many of them outdated, some still working but including a lot of cruft because people just copy it around. Sometimes the required directories change over time and your existing workspaces just stop working.
Git has sparse client views with VFS these days.
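For anyone who hasn't tried it, a minimal sketch of the built-in sparse checkout (repo URL and paths are made up):

    git clone --filter=blob:none --sparse https://example.com/big-mono-repo.git
    cd big-mono-repo
    git sparse-checkout set src/libfoo tools/build   # only these directories materialize
    git sparse-checkout add src/libbar               # widen the view later

It filters what gets checked out, though unlike an SD/Perforce view it can't remap a directory to a different local path, which is the part people upthread miss.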
As always, git's answer to the problem is "stop being afraid of `git submodule`."
Cross-repo commits are not a problem as long as you understand "it only counts as truly committed if the child repo's commit is referenced from the parent repo".
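A minimal sketch of that workflow (repo URL and paths are made up):

    git submodule add https://example.com/libfoo.git third_party/libfoo
    git commit -m "Add libfoo submodule"

    # later, move the dependency forward
    cd third_party/libfoo && git checkout v2.1.0 && cd ../..
    git add third_party/libfoo             # stage the new pinned commit in the parent
    git commit -m "Bump libfoo to v2.1.0"  # only now is the bump truly committed

    # consumers of the parent repo then have to sync the pin themselves
    git pull && git submodule update --init --recursive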
Git submodules are awful. Using Subversion's equivalent (svn:externals) should be mandatory for anyone claiming Git's implementation is somehow worthwhile or good.
> it only counts as truly committed if the child repo's commit is referenced from the parent repo
This is a big problem in my experience. Relying on consumers of your dependency to upgrade their submodules isn't realistic.
Ok, but now tell me your real thoughts on sysgen. ;-)
> git supported directory mapping.
Is this a "git" failure or a "Linux filesystems suck" failure?
It seems like "Linux filesystems" are starting to creak from several directions (Nix needing binary patching, atomic desktops having poor deduplication, containers being unable to do smart things with home directories or too many overlays).
Would Linux simply sucking it up and adopting ZFS solve this or am I missing something?
How is that related? I don’t think anyone would suggest NTFS is a better fit for these applications. It worked because it was a feature of the version control software, not because of file system features.
What would ZFS do for those issues? I guess maybe deduplication, but otherwise I'm not thinking of anything that you can't do with mount --bind and overlays (and I'm not even sure ZFS would replace overlays)
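A rough sketch of what I mean, with made-up paths (run as root):

    # remap a shared directory into a project without the VCS knowing
    mount --bind /srv/shared/libfoo /home/me/projectA/libs/libfoo

    # compose layers: lowerdir is a colon-separated read-only stack
    mount -t overlay overlay \
        -o lowerdir=/srv/base:/srv/layer1,upperdir=/srv/upper,workdir=/srv/work \
        /mnt/merged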
Snapshots seem to be a cheap feature in ZFS but are expensive everywhere else, for example.
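Roughly (dataset names made up):

    zfs snapshot tank/home@before-upgrade                       # copy-on-write, near-instant
    zfs rollback tank/home@before-upgrade                       # revert the dataset
    zfs clone    tank/home@before-upgrade tank/home-experiment  # writable copy sharing blocks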
OverlayFS has had performance issues on Linux for a while (once you start composing a bunch of overlays, performance drops dramatically and you start hitting limits on the number of overlays).