Always nice to read a new retelling of this old story.
TFA throws some shade at how "a single get of the office repo took some hours" then elides the fact that such an operation was practically impossible to do on git at all without creating a new file system (VFS). Perforce let users check out just the parts of a repo that they needed, so I assume most SD users did that instead of getting every app in the Office suite every time. VFS basically closes that gap on git ("VFS for Git only downloads objects as they are needed").
Perforce/SD were great for the time and for the centralised VCS use case, but the world has moved on I guess.
Some companies have developed their own technology like VFS for use with Perforce, so you can check out the entire suite of applications but only pull the files when you try to access them in a specific way. This is a lot more important in game development, where massive binary source assets are stored alongside text files.
It uses the same technology built into Windows that remote-drive programs (probably) use.
Personally I kind of still want some sort of server-based VCS which can store your entire company's source without needing to keep the entire history locally when you check something out. But unfortunately git is still good enough for ad-hoc use between machines that I don't feel the need to set up a central server and CI/CD pipeline yet.
Also being able to stash, stage hunks, and interactively rebase commits are features that I like and work well with the way I work.
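For readers coming from SD/Perforce, those features map onto commands roughly like the following (a minimal sketch; the branch name is illustrative):

    git stash                    # shelve uncommitted work in progress
    git add -p                   # interactively stage individual hunks
    git rebase -i origin/main    # reorder, squash, or reword local commits before sharing
    git stash pop                # bring the shelved work back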
Doesn't SVN let you check out and commit any folder or file at any depth of a project you choose? Maybe it's not the checkouts and commits so much, but log history for a single subtree is something I miss from the SVN tooling.
My firm still uses Perforce and I can't say anyone likes it at this point. You can almost see the light leave the eyes of new hires when you tell them we don't use git like the rest of the world.
Yeah it's an issue for new devs for sure. TFA even makes the point, "A lot of people felt refreshed by having better transferable skills to the industry. Our onboarding times were slashed by half".
I cannot believe that new hires would be upset by the choice of version control software. They joined a new company after jumping through so many hoops, and it's on them to keep an open mind towards the processes and tools at the new company.
VFS does not replace Perforce. Most AAA game companies still use Perforce. In particular, they need locks on assets so two people don't edit them at the same time and end up with an unmergeable change and wasted time as one artist has to throw their work away.
I'm a bit surprised git doesn't offer a way to check out only specific parts of the tree, to be honest. It seems like it'd be pretty easy to graft on with an intermediate service that understands object files, etc.
I spent nearly a week of my Microsoft internship in 2016 adding support for Source Depot to the automated code reviewer that I was building (https://austinhenley.com/blog/featurestheywanted.html) despite having no idea what Source Depot was!
Quite a few devs were still using it even then. I wonder if everything has been migrated to git yet.
VSS was picked up via the acquisition of One Tree Software in Raleigh. Their product was SourceSafe, and the "Visual" part was added when it was bundled with their other developer tools (Visual C, Visual Basic, etc). Prior to that Microsoft sold a version control product called "Microsoft Delta" which was expensive and awful and wasn't supported on NT.
One of the people who joined Microsoft via the acquisition was Brian Harry, who led the development of Team Foundation Version Control (part of Team Foundation Server - TFS) which used SQL Server for its storage. A huge improvement in manageability and reliability over VSS. I think Brian is retired now - his blog at Microsoft is no longer being updated.
From my time using VSS, I seem to recall a big source of corruption was its use of network file locking over SMB. If there were a network glitch (common in the day) you'd have to repair your repository. We set up an overnight batch job to run the repair so we could be productive in the mornings.
I used VSS in the 90s as well, it was a nightmare when working in a team. As I recall, Microsoft themselves did not use VSS internally, at least not for the majority of things.
Yes, I used VSS as a solo developer in the 90s. It was a revelation at the time. I met other VCS systems at grad school (RCS, CVS).
I started a job at MSFT in 2004 and I recall someone explaining that VSS was unsafe and prone to corruption. No idea if that was true, or just lore, but it wasn't an option for work anyway.
The integration with SourceSafe and all of the tools was pretty cool back then. Nothing else really had that level of integration at the time. However, VSS was seriously flaky. It would corrupt randomly for no real reason. Daily backups were always being restored in my workplace. Then they picked PVCS. At least it didn't corrupt itself.
I think VSS was fine if you used it on a local machine. If you put it on a network drive things would just flake out. It also got progressively worse as newer versions came out. Nice GUI, very straightforward to teach someone how to use (check out a file, change it, check it in, like a library book), random corruptions: that about sums up VSS. That check-in/check-out model seems simpler for people to grasp. The virtual/branch systems most of the other ones use are kind of a mental block for many until they grok it.
It's an absurd understatement. The only people that seriously used VSS and didn't see any corruption were the people that didn't look at their code history.
I used VSS for a few years back in the late '90s and early 2000s. It was better than nothing - barely - but it was very slow, very network-intensive (think MS Access rather than SQL), it had very poor merge primitives (when you checked out a file, nobody else could change it), and yes, it was exceedingly prone to corruption. A couple of times we just had to throw away history and start over.
My memory is fuzzy on this but I remember VSS trusting the client for its timestamps and everything getting corrupted when someone's clock was out of sync. Which happened regularly because NTP didn't work very well on Windows back in the early 2000s.
Is that what inspired the "Exchange: The Most Feared and Loathed Team in Microsoft" license plate frames? I'm probably getting a bit of the wording wrong. It's been nearly 20 years since I saw one.
> Authenticity mattered more than production value.
Thanks for sharing this authentic story! As an ex-MSFT in a relatively small product line that only started switching to Git from SourceDepot in 2015, right before I left, I can truly empathize with how incredible a job you guys have done!
I spent a couple years at Microsoft and our team used Source Depot because a lot of people thought that our products were special and even Microsoft's own source control (TFS at the time) wasn't good enough.
I had used TFS at a previous job and didn't like it much, but I really missed it after having to use Source Depot.
I was surprised that TFS was not mentioned in the story (at least not as far as I have read).
It should have existed around the same time and other parts of MS were using it. I think it was released around 2005 but MS probably had it internally earlier.
We used it. We knew no better. It was different then, you might not hear about alternatives unless you went looking for them. Source Safe was integrated with Visual Studio so was an obvious choice for small teams.
Get this: if you wanted to change a file you had to check it out. It was then locked and no one else could change it. Files were literally read-only on your machine unless you checked them out. The 'one at a time, please' approach to source control (the other approach being 'let's figure out how to merge this later').
Lucky you. Definitely one of the worst tools I’ve had the displeasure of working with. Made worse by people building on top of it for some insane reason.
I want to thank the dev leads who trained this green-behind-the-ears engineer on the mysteries of Source Depot. Once I understood it, it was quite illuminating. I am glad we only had a dependency on WinCE and IE, and so the clone only took 20 minutes instead of days. I don't remember your names, but I remember your willingness to step up and help onboard a new person so they could start being productive. I pay this attitude forward with new hires on my team no matter where I go.
Funny how most folks remember the git migration as a tech win, but honestly the real unlock was devs finally having control over their own flow: no more waiting on sync windows, no more asking leads for branch access. Suddenly everyone could move fast without stepping on each other. That shift did more for morale than any productivity dashboard ever could. Git didn't just fix tooling, it fixed trust in the dev loop.
With a product like this that spans many decades, would the source repo contain all of these versions and their changes over time? For instance Word 97, 2000, 2003, 2007, etc.
I would hope they forked the repo for each new version, keeping the same core while being free to refactor huge parts without affecting previous versions.
I feel like we're well into the long tail now. Are there other SCM systems, or is it the end of history for source control, with git as the one-and-done solution?
Mercurial still has some life to it (excluding Meta’s fork of it), jj is slowly gaining, fossil exists.
And afaik P4 still does good business, because DVCS in general and git in particular remain pretty poor at dealing with large binary assets so it’s really not great for e.g. large gamedev. Unity actually purchased PlasticSCM a few years back, and has it as part of their cloud offering.
Google uses its own VCS called Piper which they developed when they outgrew P4.
I've heard this about game dev before. My (probably only somewhat correct) understanding is it's more than just source code--are they checking in assets/textures etc? Is perforce more appropriate for this than, say, git lfs?
There are some other solutions (like jujutsu, which, while using git as a storage medium, has some differences in the handling of commits). But I do believe we've reached a critical point where git is the one-stop shop for all source control needs despite its flaws/complexity.
git by itself is often unsuitable for XL codebases. Facebook, Google, and many other companies / projects had to augment git to make it suitable or go with a custom solution.
AOSP with 50M LoC uses a manifest-based, depth=1 tool called repo to glue together a repository of repositories. If you’re thinking “why not just use git submodules?”, it’s because git submodules have a rough UX and would require so much wrangling that a custom tool is more favorable.
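For anyone who hasn't seen it, a rough sketch of the repo workflow; the manifest URL, project names, and flags here are illustrative, not AOSP's actual layout:

    # Initialize a client from a manifest repository, then fetch shallow, single-branch checkouts.
    repo init -u https://example.com/platform/manifest -b main --depth=1
    repo sync -c -j8    # -c: fetch only each project's current branch

    # The manifest itself is just XML listing the component git repos, roughly:
    # <manifest>
    #   <remote name="origin" fetch="https://example.com/"/>
    #   <default remote="origin" revision="main"/>
    #   <project name="platform/frameworks/base" path="frameworks/base"/>
    #   <project name="platform/build" path="build"/>
    # </manifest>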
In general, the philosophy of distributed VCS being better than centralized is actually quite questionable. I want to know what my coworkers are up to and what they’re working on to avoid merge conflicts. DVCS without constant out-of-VCS synchronization causes more merge hell. Git’s default packfile settings are nightmarish — most checkouts should be depth==1, and they should be dynamic only when that file is accessed locally. Deeper integrations of VCS with build systems and file systems can make things even better. I think there’s still tons of room for innovation in the VCS space. The domain naturally opposes change because people don’t want to break their core workflows.
It's interesting to point out that almost all of Microsoft's "augmentations" to git have been open source and many of them have made it into git upstream already and come "ready to configure" in git today (cone-mode sparse checkouts, a lot of steady improvements to sparse checkouts, git commit-graph, subtle and not-so-subtle packfile improvements, reflog improvements, more). A lot of it is opt-in stuff because of backwards compatibility or extra overhead that small/medium-sized repos won't need, but so much of it is there to be used by anyone, not just the big corporations.
I think it is neat that at least one company with mega-repos is trying to lift all boats, not just their own.
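For reference, a minimal sketch of what those opt-in features look like on a stock git client today (repository URL and directory names are made up):

    git clone --filter=blob:none https://example.com/big-repo.git   # blobless partial clone
    cd big-repo
    git sparse-checkout init --cone            # cone-mode sparse checkout
    git sparse-checkout set word/ shared/      # only materialize these directories
    git commit-graph write --reachable         # precompute the commit-graph for fast history walks
    git maintenance start                      # background repack / prefetch / commit-graph upkeep
    git config fetch.writeCommitGraph true
    # 'scalar clone <url>' (bundled with recent git releases) turns most of this on by default.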
git submodules have a bad ux but it's certainly not worse than Android's custom tooling. I understand why they did it but in retrospect that seems like an obvious mistake to me.
> In the early 2000s, Microsoft faced a dilemma. Windows was growing enormously complex, with millions of lines of code that needed versioning. Git? Didn’t exist. SVN? Barely crawling out of CVS’s shadow.
I wonder if Microsoft ever considered using BitKeeper, a commercial product that began development in 1998 and had its public release in 2000. Maybe centralized systems like Perforce were the norm and a DVCS like BitKeeper was considered strange or unproven?
We did migrate from Perforce to Git for some fairly large repositories, and I can relate to some of the issues. Luckily we did not have to invent a VFS, although git-lfs was useful for large files.
We communicated the same information through multiple channels: weekly emails, Teams, wiki docs, team presentations, and office hours. The rule: if something was important, people heard it at least 3 times through different mediums.
If only this were standard. Last week I received the only notification that a bunch of internal systems were being deleted in two weeks. No scream test, no archiving, just straight deletion. Sucks to be you if you missed the email for any reason.
Every month or two, we get notifications along the FINAL WARNING lines, telling us about some critical system about to be deleted, or some new system that needs to be set up Right Now, because it is a Corporate Standard (that was never rolled out properly), and by golly we have had enough of teams ignoring us, the all powerful Board has got its eyes on you now.
It's a full time job to keep up with the never-ending churn. We could probably just spend all our engineering effort being compliant and never delivering features :)
Company name withheld to preserve my anonymity (100,000+ employees).
Even with this, there were many surprised people. I'm still amazed at all of the people that can ignore everything and just open their IDE and code (and maybe never see teams or email)
If you read all the notifications you'll never do your actual job. People who just open their IDE and code are to be commended in some respects - but it's a balance of course.
Alternatively, communications fatigue. How many emails does the average employee get with nonsense that doesn't apply to them? Oh cool, we have a new VP. Oh cool, that department had a charity drive. Oh cool, system I've never heard of is getting replaced by a new one, favourite of this guy I've never heard of.
Add in the various spam (be it attacks or just random vendors trying to sell something).
At some point, people start to zone out and barely skim, if that, most of their work emails. Same with work chats, which are also more prone to people sharing random memes or photos from their picnic last week or their latest lego set.
No kidding. The amount of things that change in important environments without anyone telling people outside their teams in some organizations can be maddening.
What we do is we scream the day before, all of us, get replied that we should have read the memo, reply we have real work to do, and the thing gets cancelled last minute, a few times a year, until nobody gives a fuck anymore.
Source Depot was based on Perforce. Microsoft bought a license for the Perforce source code and made changes to work at Microsoft scale (Windows, Office).
TFS was developed in the Studio team. It was designed to work on Microsoft scale and some teams moved over to it (SQL server). It was also available as a fairly decent product (leagues better than SourceSafe).
We had a similar setup, also with a homegrown VCS developed internally in our company, where I sometimes acted as branch admin. I’m not sure it worked exactly like Source Depot, but I can try to explain it.
Basically, instead of everyone creating their own short-lived branches (an expensive operation), you would have long-lived branches that a larger group of people would commit to (several product areas). The branch admin's job was then to get the work of all of these people forward integrated to a branch upwards in the hierarchy. This was attempted a few times per day, but if tests failed you would have to reach out to the responsible people to get those tests fixed. Then later, when you get the changes merged upwards, some other changes have also been made to the main integration branch, and now you need to pull these down into your long-lived branch - reverse integration - such that your branch is up to date with everyone else in the company.
At least in the Windows group, we use ri and fi oppositely from how you describe. RI = sharing code with a broader group of people toward trunk. FI = absorbing code created by the larger group of people on the dev team. Eventually we do a set of release forks that are isolated after a final set of FIs, so really outside customers get code via FI and then cherry pick style development.
RI/FI is similar to having long-lived branches in Git. Imagine you have a "develop-word" branch in git. The admins for that branch would merge all of the changes of their code to "main" and from "main" to their long lived branches. It was a little bit different than long-lived git branches as they also had a file filter (my private branch only had onenote code and it was the "onenote" branch)
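A rough git analogy for the RI/FI flow described above, using the Windows-style convention from these comments (branch names are made up; some teams used the terms the other way around):

    # FI (forward integration): absorb the broader integration branch into the team branch.
    git checkout develop-word
    git merge main

    # RI (reverse integration): push the team branch's accumulated, tested work up toward trunk.
    git checkout main
    git merge develop-word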
I've long wanted a hosted Git service that would help me maintain long lived fork branches. I know there's some necessary manual work that is occasionally required to integrate patches, but the existing tooling that I'm familiar with for this kind of thing is overly focused on Debian packaging (quilt, git-buildpackage) and has horrifyingly poor ergonomics.
I'd love a system that would essentially be a source control of my patches, while also allowing a first class view of the upstream source + patches applied, giving me clear controls to see exactly when in the upstream history the breakages were introduced, so that I'm less locking in precise upstream versions that can accept the patches, and more actively engaging with ranges of upstream commits/tags.
I can't imagine how such a thing would actually be commercially useful, but darned if it wouldn't be an obvious fit for AI to automatically examine the upstream and patch history and propose migrations.
What were the biggest hurdles?
Where did Git fall short?
How did you structure the repo(s)?
Were there many artifacts that went into the integration with Git LFS?
I actually remember using Perforce back in like 2010 or something. And I can't remember why or for which client or employer. I just remember it was stupid.
I used Perforce a lot in the 90s, when it was simple (just p4, p4d, and p4merge!), super fast, and never crashed or corrupted itself. Way simpler, and easier to train newbies on, than any of the alternatives.
Subdirectories-as-branches (like bare repo + workspace-per-branch practices w/git) is so much easier for average computer users to grok, too. Very easy to admin, too.
No idea what the current "enterprisey" offering is like, though.
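A minimal sketch of that "bare repo + workspace-per-branch" layout done with git worktrees (URL and branch names are illustrative):

    git clone --bare https://example.com/proj.git proj.git
    cd proj.git
    git worktree add ../proj-main  main          # ../proj-main is a working tree on 'main'
    git worktree add ../proj-rel2  release/2.0   # ../proj-rel2 is a working tree on 'release/2.0'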
For corporate teams, it was a game changer. So much better than any alternative at the time.
We're all so used to git that we've become used to its terribleness and see every other system as deficient. Training and supporting a bunch of SWE-adjacent users (hw eng, ee, quality, managers, etc.) is a really, really good reality check on how horrible the git UX and data model are (e.g. obliterating secrets--security, trade, or PII/PHI--that get accidentally checked in is a stop-the-world moment).
For the record, I happily use git, jj, and Gitea all day every day now (and selected them for my current $employer). However, also FTR, I've used SCCS, CVS, SVN, VSS, TFS and MKS SI professionally, each for years at a time.
All of the comments dismissing tools that are significantly better for most use cases other than distributed OSS, just because they lost the popularity contest, are shortsighted.
Git has a loooong way to go before it's as good in other ways as many of its "competitors". Learning about their benefits is very enlightening.
And, IIRC, p4 now integrates with git, though I've never used it.
I've used CVS, SVN, TFS, Mercurial, and Git in the past, so I have plenty of exposure to different options. I have to deal with Perforce in my current workplace and I have to say that even from this perspective it's honestly pretty bad in terms of how convoluted things are.
Perforce is really nice if you need to source control 16k textures next to code without thinking too much about it. Git LFS absolutely works but it's more complicated and has less support in industry tooling. Perforce also makes it easier to purge (obliterate) old revisions of files without breaking history for everyone. This can be invaluable if your p4 server starts to run out of disk space.
The ability to lock files centrally might seem outdated by the branching and PR model, but for some organizations the centralized solution works way better because they have built viable business processes around it. Centralized can absolutely smoke distributed in terms of iteration latency if the loop is tight enough and the team is cooperating well.
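A hedged sketch of the Perforce workflows mentioned above (depot paths and the changelist description are made up):

    p4 sync //depot/game/art/...          # sync only the subtree you need
    p4 edit //depot/game/art/hero.psd     # open the binary for edit
    p4 lock //depot/game/art/hero.psd     # exclusive lock: nobody else can submit changes to it
    p4 submit -d "Update hero texture"    # submitting releases the lock

    # Reclaim server disk by permanently deleting every revision of an obsolete binary:
    p4 obliterate -y //depot/game/art/old_background.psd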
I agree with everything you say except that git-lfs works. For modern game dev (where a full checkout is around 1 TB of data) git-lfs is too slow, too error-prone, and too wasteful of disk space.
Perforce is a complete PITA to work with, too expensive and is outdated/flawed for modern dev BUT for binary files it's really the only game in town (closely followed by svn but people have forgotten how good svn was and only remember how bad it was at tracking branch merging).
I would say it's no more convoluted and confusing than git. I used Perforce professionally for quite a few years in gamedev, and found that a bit confusing at first. Then I was self-employed and used git, and coming to git from Perforce I found it very confusing at first. But then I grew to love it. Now I'm back to working for a big gamedev company and we use Perforce and I feel very proficient in both.
One thing I find annoying about these Perforce hate stories: yes it's awkward to branch in Perforce. It is also the case that there is no need to ever create a branch for feature development when you use Perforce. It's like complaining that it is hard to grate cheese with a trumpet. That just isn't applicable.
> Microsoft had to collaborate with GitHub to invent the Virtual File System for Git (VFS for Git) just to make this migration possible. Without VFS, a fresh clone of the Office repository (a shallow git clone would take 200 GB of disk space) would take days and consume hundreds of gigabytes.
It takes less than an hour on my third-world apartment wifi to download the Call of Duty Modern Warfare remake, which is over 200 gigabytes. Since we're not talking about remote work here, I think Microsoft offices and servers (probably on a local network) might have managed similar bandwidth back then.
Having had yesterday the dubious pleasure of using MS Word for the first time in a decade, I can safely affirm that they could have just piped the whole Office repo to the Windows equivalent of /dev/null and nothing of value would have been lost.
If it were that simple, would 100s of engineers spend so much time and effort? They did what they have to and spent the time and energy to maintain some semblance of commit and change history.
GP has a valid point. We had a Git repo managed in BitBucket that was gigantic because it contained binary files and the team didn't know about LFS or about storing them in an external tool like Artifactory. So checkouts took forever, and even with shallow clones it took forever. With a CI/CD system running constantly, tests needing constant full coverage, and hundreds of developers, it eats into developers' time. We can't just prune all the branches because of compliance rules.
So we ended up removing all the binary artifacts before cloning into a new repo, then marking the old repo as read-only.
Microsoft seemed to want to mirror everything rather than keep source depot alive.
We had another case where we had a Subversion system that went out of security compliance, which we simply ported to our git systems and abandoned.
So my guess is they wanted everything to look the same, not just to import the code.
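For anyone facing the same cleanup, a hedged sketch of one way to do it with the separate git-filter-repo and git-lfs tools (the size threshold, patterns, and URLs are illustrative):

    git clone --mirror https://example.com/old-repo.git
    cd old-repo.git
    git filter-repo --strip-blobs-bigger-than 10M                 # rewrite history without the big blobs
    # ...or rewrite history so the binaries live in LFS instead of the pack files:
    git lfs migrate import --include="*.dll,*.zip" --everything
    git push --mirror https://example.com/new-repo.git            # publish the rewritten history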
In about 2010, I briefly had a contract with a security firm with one dev, and there was no source control, and everything written was in low quality PHP. I quit after a week.
php_final_final_v2.zip shipped to production. A classic. I had a similar experience with https://www.ioncube.com/ php encryption. Everything encrypted and no source control.
> Today, as I type these words, I work at Snowflake. Snowflake has around ~2,000 engineers. When I was in Office, Office alone was around ~4,000 engineers.
Excel turns 40 this year and has changed very little in those four decades. I can't imagine you need 4,000 engineers just to keep it backwards compatible.
In the meantime we've seen entire companies built with a ragtag team of hungry devs.
This article makes it sound like thousands of engineers who are good enough to qualify at Microsoft and work on Office hadn't used git yet? That sounds a bit overplayed tbh; if you haven't used git you must live under a rock. You can't use Source Depot at home.
Something not touched on by others: the standard Microsoft contract outlawed any moonlighting for years, and any code you created was potentially going to be claimed by Microsoft, so you didn't feel safe working on side projects or contributing to open source.
Open source code was a pariah. You were warned, unless you had an exception, never to look at any open source code even vaguely related to your projects, including in personal time, for fear of opening Microsoft up to legal trouble.
In that context, when and why would the average dev get time to properly use git - not just gain a shallow understanding, but use it at the complexity level needed for a large internal monorepo ported to it?
I've used git at Microsoft for years, but using git with the Office client is totally different. I believe it's used differently, with very different expectations, in Windows.
You'd be surprised at the number of people at Microsoft whose entire career has been at Microsoft (since before git was created) and who have never used Git. Git is relatively new (2005), but source control systems are not.
It's entirely plausible that a long-term engineer at Microsoft wouldn't have used git. I'm sure a considerable number of software engineers don't program as a hobby.
It only takes a week to learn enough git to get by, and only a month or two to become every-day use proficient. Especially if one is already familiar with perforce, or svn, or other VCS.
Yes, there is a transition, no it isn't really that hard.
Anyone who views lack of git experience as a gap in a CV is selecting for the wrong thing.
It's oddly fascinating that Microsoft has managed to survive for so long with ancient/bad tools for software engineering. Almost like "life finds a way", but for software dev. From the outside it seems like they are doing better now after embracing OSS/generic dev tools.
At one point Source Depot was incredibly advanced, and there are still features it had that git doesn't. Directory mapping was a standout feature! Being able to pull down only certain directories from a depot, remap where they live locally, and even have the same file appear in multiple places makes sharing dependencies across multiple projects really easy, and a lot of complicated tooling around "monorepos" wouldn't need to exist if git supported directory mapping.
(You can get 80% of the way there with symlinks, but in my experience they eventually break in git once too many different platforms are making commits.)
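The directory mapping described above is roughly a Perforce-style client view; a hedged sketch of what one looks like (depot and client paths are made up, and the "same file in multiple places" part is an SD capability reported in the comment rather than stock Perforce behavior):

    p4 client    # opens the client spec in your editor; its View section might read:
    #
    # View:
    #   //depot/office/word/...    //my-client/word/...
    #   //depot/shared/libs/...    //my-client/word/external/libs/...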
Also at one point I maintained an obscenely advanced test tool at MS, it pounded through millions of test cases across a slew of CPU architectures, intermingling emulators and physical machines that were connected to dev boxes hosting test code over a network controlled USB switch. (See: https://meanderingthoughts.hashnode.dev/how-microsoft-tested... for more details!)
Microsoft had some of the first code coverage tools for C/C++, spun out of a project from Microsoft Research.
Their debuggers are still some of the best in the world. NodeJS debugging in 2025 is dog shit compared to C# debugging in 2005.
I never understood the value of directory mapping when we used Perforce. It only seemed to add complexity when one team checked out code in different hierarchies and then some builds worked, some didn’t. Git was wonderful for having a simple layout.
As always, git's answer to the problem is "stop being afraid of `git submodule`."
Cross-repo commits are not a problem as long as you understand "it only counts as truly committed if the child repo's commit is referenced from the parent repo".
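A minimal sketch of that rule in practice (URL and path are illustrative):

    git submodule add https://example.com/libs/shared.git vendor/shared
    git commit -m "Add shared as a submodule"

    # Later, to pick up new work from the child repo:
    cd vendor/shared && git fetch origin && git checkout origin/main && cd ../..
    git add vendor/shared                      # stage the new child commit pointer
    git commit -m "Bump shared submodule"      # the parent now pins the child's exact commit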
Is this a "git" failure or a "Linux filesystems suck" failure?
It seems like "Linux filesystems" are starting to creak from several directions (Nix needing binary patching, atomic desktops having poor deduplication, containers being unable to do smart things with home directories or too many overlays).
Would Linux simply sucking it up and adopting ZFS solve this or am I missing something?
Google used Perforce for years and I think Piper still has basically the same interface? So no, MSFT wasn’t ridiculously behind the times by using Source Depot for so long.
Let’s not forget that Microsoft developed a lot of tools in the first place, as in, they were one of the companies that created things that didn’t really exist before Microsoft created them.
Git isn’t even very old, it came out in 2005. Microsoft Office first came out in 1990. Of course Office wasn’t using git.
Some examples would be useful here. Not knocking MS tools in general, but are there any that were industry firsts? Source code control, for example, existed at least since SCCS, which in turn predates Microsoft itself.
Can't say anything about perforce as I've never used it, but I'd give my left nut to get Google's Piper instead of git at work :)
It's existed for a while. Partial clones and LFS.
https://git-scm.com/docs/partial-clone
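A minimal example of that combination (repository URL and directory names are made up):

    git clone --filter=blob:none --no-checkout https://example.com/office.git
    cd office
    git sparse-checkout set onenote/ shared/   # cone mode by default on recent git
    git checkout main                          # blobs are fetched lazily for just these paths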
Naah still a lot of stuff works on sd !! Those sd commands and setting up sd gives me chills !!
I miss CodeFlow everyday. It was such a great tool to use.
CodeFlow lives on and is still held in high regard. It's even made its way to supporting GitHub repos, not just git. https://chromewebstore.google.com/detail/codeflow/aphnoipoco...
Still buried as internal only though.
Most of the day to day is in git, now.
Having used vss in the 90s myself, it surprised me it wasn't even mentioned.
VSS (Visual SourceSafe) being Microsoft's own source versioning protocol, unlike Source Depot which was licensed from Perforce.
> ...I seem to recall a big source of corruption was it's use of network file locking over SMB...
Shared database files (of any kind) over SMB... shudder Those were such bad days.
Oh, TIL! Thanks for adding that to the story.
Indeed my experiences of vss was also not amazing and certainly got corrupted files too.
That’s correct. Before SD, Microsoft orgs (at least Office and Windows; I assume others too) used an internal tool called SLM (“slime”); Raymond Chen has blogged about it, in passing: https://devblogs.microsoft.com/oldnewthing/20180122-00/?p=97...
I was mandated to use VSS in a university course in the late 90s -- one course, one project -- and we still managed to corrupt it.
We used to call it Visual Source Unsafe because it was corrupting repos all the time.
I was on the team that migrated Microsoft from XNS to TCP/IP - it was way less involved, but similar lessons learned.
Migrating from MSMAIL -> Exchange, though - that was rough
Is that what inspired the "Exchange: The Most Feared and Loathed Team in Microsoft" license plate frames? I'm probably getting a bit of the wording wrong. It's been nearly 20 years since I saw one.
Probably. A lot of people really loved MSMAIL; not so much Exchange.
I have more long, boring stories about projects there, but that’s for another day
Yeah, it was a whole journey. I can't believe it happened. Thanks for your comment.
Thank you! Btw, it reminds me of the book "Showstopper" about the journey of releasing Windows NT; highly recommended!
I spent a lot of time coaching people out of Source Depot; it was touch and go there for a while. It was worth it though. Thank you for your effort.
I’d like to know when Microsoft internally migrated away from Visual SourceSafe…
They should have recalled it to avoid continued public use…
I doubt most teams ever used it.
USGEO used it in the late 90s, as well as RAID
I don't know that they ever used it internally, certainly not for anything major. If they had, they probably wouldn't have sold it as it was...
Can't explain TFS though, that was still garbage internally and externally.
Around 2000? The only project I ever knew that used it was .NET and that was on SD by around then.
I didn't even know Microsoft SourceSafe existed.
It was pretty janky. We used it in the gamedev world in the 90s once the migration to Visual C started.
Not doubting it but I don't understand how a shallow clone of OneNote would be 200GB.
Shallow clone of all of office, not onenote.
Oh alright. Thanks.
Must have videos or binaries.
They probably vendor every single .dll it uses.
google also has a mercurial interface to piper
Perforce is used in game dev, animation, etc. git is pretty poor at dealing with lots of really large assets
Why is this still the case?
Meta uses a custom VCS. They recently released sapling: https://sapling-scm.com/docs/introduction/
There was SourceSafe (VSS) around that time and TFVC afterwards.
Neither SourceSafe nor TFVC were distributed version control systems (DVCS) so I'm not sure what you mean.
I feel this.
In my previous company it came to me as a surprise to learn from a third party that our office had moved lol.
What's the connection (if any) between "Source Depot" and TFSVC?
None that I know of, Source Depot is derived from Perforce.
Thank goodness I don't have to use IBM's Rational Team Concert anymore. Even just thinking about it makes me shudder.
It was a great tool for losing changes!
Could someone explain the ideas of forward integration and reverse integration in Source Depot?
I’d never heard of Source Depot before today.
source depot is (was?) essentially a fork of perforce.
The article mentioned something along those lines, but I’ve never used it either.
I’ve only ever really used CVS, SVN, and Git.
Perforce is convoluted and confusing, but I don't think it's really fair to call it stupid. It is still virtually unmatched in a couple of areas.
I would say it's no more convoluted and confusing than git. I used Perforce professionally for quite a few years in gamedev, and found it a bit confusing at first. Then I was self-employed and used git, and coming to git from Perforce I found it very confusing at first. But then I grew to love it. Now I'm back to working for a big gamedev company and we use Perforce, and I feel very proficient in both.
I wasn't being fair, I was being mean. Perforce is stupid and ugly.
There's still a lot of Perforce around. I've thankfully managed to avoid it but I have plenty of friends in the industry who still have to use it.
Perforce is still widely used in the game industry
And expensive.
One thing I find annoying about these Perforce hate stories: yes, it's awkward to branch in Perforce, but there's also no need to ever create a branch for feature development when you use Perforce. It's like complaining that it is hard to grate cheese with a trumpet. That just isn't applicable.
> Microsoft Office migration from Source Depot to Git
Will they get an annoying window, in the middle of the migration, telling them that Office must be updated now or the world will end?
You mean like the whole office suite?
> Microsoft had to collaborate with GitHub to invent the Virtual File System for Git (VFS for Git) just to make this migration possible. Without VFS, a fresh clone of the Office repository (a shallow git clone would take 200 GB of disk space) would take days and consume hundreds of gigabytes.
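Worth noting that the ideas behind VFS for Git have since landed in stock git as partial clone and sparse checkout (plus the Scalar tooling that now ships with recent git). A rough sketch, with a made-up repo URL and paths:

    # fetch commits and trees up front, but blobs only on demand
    git clone --filter=blob:none --sparse https://example.com/office.git
    cd office
    # materialize only the directories you actually work in
    git sparse-checkout set word/ shared/build/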
It takes less than an hour on my third-world apartment wifi to download the Call of Duty Modern Warfare remake, which is over 200 gigabytes. Since we're not talking about remote work here, I think Microsoft's offices and servers (probably on a local network) might have managed similar bandwidth back then.
Having had yesterday the dubious pleasure of using MS Word for the first time in a decade, I can safely affirm that they could have just piped the whole Office repo to the Windows equivalent of /dev/null and nothing of value would have been lost.
Don’t do this on a repository with 35+ years of history! That’s all valuable information you want to keep.
Anything before Office 2003 you can delete. Anything after Office 2003 you can also delete. There, saved you a few terabytes.
This isn’t Reddit. It’s a lame drive-by joke. The system is working.
If it were that simple, would hundreds of engineers spend so much time and effort? They did what they had to and spent the time and energy to maintain some semblance of commit and change history.
GP has a valid point. We had a Git repo managed in Bitbucket that was gigantic because it contained binary files and the team didn't know about LFS or about storing them in an external tool like Artifactory. So checkouts took forever, even shallow clones. With a CI/CD system running constantly, tests needing constant full coverage, and hundreds of developers, it really eats into developers' time. And we couldn't just prune all the branches because of compliance rules.
So we ended up removing all the binary artifacts before cloning into a new repo, then marking the old repo read-only.
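For anyone facing the same cleanup, a rough sketch of one way to do it these days with git lfs migrate (URLs and file patterns are made up):

    # clone, rewrite history so the big binaries become LFS pointers,
    # then push to a fresh repo and archive the old one read-only
    git clone https://example.com/old-repo.git && cd old-repo
    git lfs migrate import --everything --include="*.dll,*.zip,*.psd"
    git remote add slim https://example.com/new-repo.git
    git push slim --all && git push slim --tags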
Microsoft seemed to want to mirror everything rather than keep source depot alive.
We had another case where we had a subversion system that went out of security compliance that we simply ported to our git systems and abandoned it.
So my guess is they wanted everything to look the same, not just to import the code.
> If it were that simple, would 100s of engineers spend so much time and effort?
Taking into account that they rounded corners in Office, I would say yes.
In about 2010, I briefly had a contract with a security firm that had one dev, no source control, and everything written in low-quality PHP. I quit after a week.
php_final_final_v2.zip shipped to production. A classic. I had a similar experience with https://www.ioncube.com/ PHP encryption: everything encrypted and no source control.
What kind of security services did they provide? Breaches?
Job security for the dev, probably.
> We spent months debugging line ending handling
"Gosh, that sounds like a right mother," said Unix.
> Today, as I type these words, I work at Snowflake. Snowflake has around ~2,000 engineers. When I was in Office, Office alone was around ~4,000 engineers.
I'm sorry, what?! 4,000 engineers doing what, exactly?
Excel turns 40 this year and has changed very little in those four decades. I can't imagine you need 4,000 engineers just to keep it backwards compatible.
In the meantime we've seen entire companies built with a ragtag team of hungry devs.
The author mentions the list of products that were affected by this migration:
> Word, Excel, Powerpoint, Sway, Publisher, Access, Project, OneNote, Shared Services (OSI), Shared UX, + everything in the web
This article makes it sound like thousands of engineers who are good enough to get hired at Microsoft and work on Office haven't used git yet? That sounds a bit overplayed tbh; if you haven't used git you must live under a rock. You can't use Source Depot at home.
Overall good story though
Something not touched on by others: the standard Microsoft contract outlawed any moonlighting for years, and any code you created was potentially going to be claimed by Microsoft, so you didn't feel safe working on side projects or contributing to open source. Open source code was a pariah: unless you had an exception, you were warned never to look at any open source code even vaguely related to your projects, including in your personal time, for fear of opening Microsoft up to legal trouble.
In that context, when and why would the average dev get time to properly use git? Not just gain a shallow understanding, but use it at the level of complexity needed for a large internal monorepo ported to it.
I've used git at Microsoft for years, but using git with the Office client is totally different. I believe it's used differently, with very different expectations, in Windows.
You'd be surprised at the number of people whose entire career has been at Microsoft (starting before git was even created) and who have never used Git. Git is relatively new (2005), but source control systems are not.
That's still two decades. Git is so popular Microsoft bought one of the major forges 7 years ago.
To have never touched it in the last decade? You've got a gap in your CV.
It's entirely plausible that a long-term engineer at Microsoft wouldn't have used git. I'm sure a considerable number of software engineers don't program as a hobby.
It only takes a week to learn enough git to get by, and only a month or two to become proficient in everyday use. Especially if one is already familiar with Perforce, SVN, or another VCS.
Yes, there is a transition, no it isn't really that hard.
Anyone who views lack of git experience as a gap in a CV is selecting for the wrong thing.
Sure you can use Source Depot (actually Perforce) at home: https://www.perforce.com/p/vcs/vc/free-version-control
I think Source Depot is a proprietary fork with a lot of Microsoft-specific stuff added in.
It's oddly fascinating that Microsoft has managed to survive for so long with ancient/bad tools for software engineering. Almost like "life finds a way", but for software dev. From the outside it seems like they are doing better now after embracing OSS/generic dev tools.
At one point Source Depot was incredibly advanced, and there are still features it had that git doesn't. Directory mapping was a standout feature: being able to pull down only certain directories from a depot, remap where they live locally, and even have the same file in multiple places. It makes sharing dependencies across multiple projects really easy, and a lot of complicated tooling around "monorepos" wouldn't need to exist if git supported directory mapping.
(You can get 80% of the way there with symlinks, but in my experience they eventually break in git when too many different platforms are making commits.)
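For those who never used it, the mapping lives in the client spec; a rough sketch with made-up depot paths (Source Depot's version was similar in spirit):

    # excerpt from `p4 client` -- pull down only two projects, and remap
    # a shared dependency to wherever the build expects it
    View:
        //depot/game/engine/...            //my-ws/engine/...
        //depot/game/shared/libs/...       //my-ws/engine/third_party/...
        //depot/game/tools/exporter/...    //my-ws/plugins/exporter/...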
Also, at one point I maintained an obscenely advanced test tool at MS; it pounded through millions of test cases across a slew of CPU architectures, intermingling emulators and physical machines connected to dev boxes hosting test code over a network-controlled USB switch. (See https://meanderingthoughts.hashnode.dev/how-microsoft-tested... for more details!)
Microsoft had some of the first code coverage tools for C/C++, spun out of a project from Microsoft Research.
Their debuggers are still some of the best in the world. NodeJS debugging in 2025 is dog shit compared to C# debugging in 2005.
I never understood the value of directory mapping when we used Perforce. It only seemed to add complexity when one team checked out code in different hierarchies and then some builds worked, some didn’t. Git was wonderful for having a simple layout.
As always, git's answer to the problem is "stop being afraid of `git submodule`."
Cross-repo commits are not a problem as long as you understand "it only counts as truly committed if the child repo's commit is referenced from the parent repo".
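A rough sketch of that workflow, with a hypothetical shared repo:

    # track the shared code as a submodule of the parent repo
    git submodule add https://example.com/shared-assets.git libs/shared-assets
    git commit -m "Track shared-assets as a submodule"

    # work happens inside the child repo as usual
    cd libs/shared-assets
    git commit -am "Tweak exporter settings" && git push
    cd ../..

    # but it only "counts" once the parent records the new child commit
    git add libs/shared-assets
    git commit -m "Bump shared-assets"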
Ok, but now tell me your real thoughts on sysgen. ;-)
> git supported directory mapping.
Is this a "git" failure or a "Linux filesystems suck" failure?
It seems like "Linux filesystems" are starting to creak from several directions (Nix needing binary patching, atomic desktops having poor deduplication, containers being unable to do smart things with home directories or needing too many overlays).
Would Linux simply sucking it up and adopting ZFS solve this or am I missing something?
Google used Perforce for years and I think Piper still has basically the same interface? So no, MSFT wasn’t ridiculously behind the times by using Source Depot for so long.
Let's not forget that Microsoft developed a lot of its tools in the first place; they were one of the companies that created things that didn't really exist before Microsoft created them.
Git isn’t even very old, it came out in 2005. Microsoft Office first came out in 1990. Of course Office wasn’t using git.
Office is a package including things like Word and Excel. Word itself came out in 1984 for the first Macintosh. Windows OS did not yet exist.
Some examples would be useful here. Not knocking MS tools in general, but are there any that were industry firsts? Source code control, for example, has existed at least since SCCS, which in turn predates Microsoft itself.