Comment by dbdr
3 days ago
> I submitted several bug fixes and refactoring, notably using smart pointers, but they were rejected for fear of breaking something.
And that, my friends, is why you want a memory safe language with as many static guarantees as possible checked automatically by the compiler.
Language choices won't save you here. The problem is organizational paralysis. Someone sees that the platform is unstable. They demand something be done to improve stability. The next management layer above them demands they reduce the number of changes made to improve stability.
Usually this results in approvals to approve the approval to approve making the change. Everyone signed off on a tower of tax forms about the change, no way it can fail now! It failed? We need another layer of approvals before changes can be made!
Yeah I've seen that move pulled. Funnily enough by an ex-Microsoft manager.
Hence the rewrite-it-in-Rust initiative, presumably. Management were aware of this problem at some level but chose a questionable solution. I don't think rewriting everything in Rust is at all compatible with their feature timelines or severe shortages of systems programming talent.
In a rewrite you can smuggle in a quality lift.
I had a memory management problem so I introduced GC/ref counting and now I have a non-deterministic memory management problem.
Ref counting is deterministic. Rust memory management is also deterministic: the memory is freed exactly when the owner of the data goes out of scope (and the borrow checker guarantees at compile time that there is no use after that).
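That scope-bound behavior is easy to observe with a `Drop` impl. A minimal sketch (the `Resource` type and `LIVE` counter are purely illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Count of live Resource values; lets us observe exactly when drops run.
static LIVE: AtomicUsize = AtomicUsize::new(0);

struct Resource;

impl Resource {
    fn new() -> Self {
        LIVE.fetch_add(1, Ordering::SeqCst);
        Resource
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Runs exactly when the owner goes out of scope -- no GC pause.
        LIVE.fetch_sub(1, Ordering::SeqCst);
    }
}

fn live_count_inside_and_after() -> (usize, usize) {
    let inside;
    {
        let _r = Resource::new();
        inside = LIVE.load(Ordering::SeqCst);
    } // `_r` is dropped here, deterministically
    (inside, LIVE.load(Ordering::SeqCst))
}

fn main() {
    assert_eq!(live_count_inside_and_after(), (1, 0));
}
```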
Cool now use the reference on another thread.
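Fair point: a plain `&T` can't be handed to `thread::spawn`, since the spawned thread may outlive the borrow. The safe escape hatches are `Arc` (ref counting again) or scoped threads. A small sketch, with illustrative names:

```rust
use std::sync::Arc;
use std::thread;

// Sum shared data from a worker thread via Arc (shared ownership).
fn sum_on_worker(data: Arc<Vec<i32>>) -> i32 {
    let handle = {
        let data = Arc::clone(&data); // bump the ref count for the worker
        thread::spawn(move || data.iter().sum::<i32>())
    };
    handle.join().unwrap()
}

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    assert_eq!(sum_on_worker(Arc::clone(&data)), 6);

    // Since Rust 1.63, scoped threads allow plain borrows, because the
    // scope guarantees the thread finishes before the borrow ends:
    let total = thread::scope(|s| s.spawn(|| data.iter().sum::<i32>()).join().unwrap());
    assert_eq!(total, 6);
}
```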
They could have started with simple Valgrind sessions before moving to Rust, though. A massive number of agents means microservices, and microservices are well suited to that kind of profiling/testing.
Visual Studio has had quite a bit of similar tooling, and you can have static analysis turned on all the time.
SAL also originated with XP SP2 issues.
Just like there have been tons of tools trying to fix C's flaws.
However the big issue with opt-in tooling is exactly it being optional, and apparently Microsoft doesn't enforce it internally as much as we thought.
> However the big issue with opt-in tooling is exactly it being optional,
That's true, and that's a problem.
> and apparently Microsoft doesn't enforce it internally as much as we thought.
but this, in my eyes, is a much bigger problem. It's baffling considering what Microsoft's core business is: operating systems, high-impact software.
> Visual Studio has had quite a bit of similar tooling, and you can have static analysis turned on all the time.
Eclipse CDT is not as capable as VS, but it is not a toy either, and it offers the same capability: always-on static analysis plus Valgrind integration. I used both without any reservation, and this habit paid dividends at every level of development.
I believe in learning the craft more than the tools themselves, because you can always hold a tool wrong. Learning the capabilities and limits of whatever you're using is a force multiplier, and considering how fierce the competition between companies is, leaving that kind of force multiplier on the table is unfathomable from my PoV.
Every tool has limits and flaws. Understanding them and being disciplined enough to check your own work is indispensable. Even if you're using something which prevents a class of footguns.
It’s org-dependent. On Windows, SAL and OACR are kings, plus any contraption MSR comes up with that they run on checked-in code and files bugs on you out of the blue :) Different standards.
I was waiting for that comment :) Remember that everybody, eventually, calls into code written in C.
If 90% of the code I run is in safe rust (including the part that's new and written by me, therefore most likely to introduce bugs) and 10% is in C or unsafe rust, are you saying that has no value?
Il meglio è l'inimico del bene. Le mieux est l'ennemi du bien. Perfect is the enemy of good.
That is an unexpected interpretation. Use the best tool for the job, also factoring what you (and your org) are comfortable with.
Depends on which OS we are talking about.
I know a few where that doesn't hold, including some still being paid for in 2026.
If you're sufficiently stubborn, it's certainly possible to call directly into code written in Verilog, held together with inscrutable Perl incantations.
High-level languages like C certainly have their place, but the space seems competitive these days. Who knows where the future will lead.
If you want something extra spicy, there are devices out there that implement CORBA in silicon (or at least FPGA), exposing remote objects accessible over CORBA.
You didn’t miss the smiley, did you? :)
It’s worse than that. Eventually everybody calls into code that hits hardware. That is the level at which the compiler (ironically?) can no longer make guarantees. Registers change outside the scope of the currently running program all the time. Reading a register can cause other registers on a chip to change. Random chips with access to a shared memory bus can modify memory that the compiler deduced was static. There be dragons everywhere at the hardware layer, and no compiler can ever reason correctly about all of them, because, guess what, rev 2 of the hardware could swap in a footprint-compatible chip clone that has undocumented behavior. So even if you gave all your board information to the compiler, the program could only be verifiably correct for one potential state of one potential hardware rev.
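For what it's worth, this is the problem volatile access addresses: it forbids the compiler from caching or eliding loads and stores, precisely because the value may change behind the program's back. A sketch in Rust, simulating the register with an ordinary variable so it runs anywhere (a real MMIO register would be a fixed, device-specific address):

```rust
use std::ptr;

// Simulated device status register. On real hardware `reg` would be a
// fixed MMIO address, and its value could change between reads without
// any write appearing anywhere in this program.
fn poll_register(reg: *mut u32) -> u32 {
    // read_volatile forces an actual memory access on every call.
    unsafe { ptr::read_volatile(reg) }
}

fn main() {
    let mut status: u32 = 0;
    let reg: *mut u32 = &mut status;
    unsafe { ptr::write_volatile(reg, 0x80) }; // the "device" sets a ready bit
    assert_eq!(poll_register(reg), 0x80);
}
```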
Sure, but eliminating bugs isn't a binary where you either eliminate all of them or it's a useless endeavor. There's a lot of value in eliminating a lot of bugs, even if it's not all of them, and I'd argue that empirically Rust does actually make it easier to avoid quite a large number of bugs that are often found in C code in spite of what you're saying.
To be clear, I'm not saying that I think it would necessarily be a good idea to try to rewrite an existing codebase that a team apparently doesn't trust they actually understand. There are a lot of other factors that go into deciding on a rewrite beyond "would the new language be a better choice in a vacuum", and I tend to be somewhat skeptical that rewriting something that's already widely used can be done without risking breaking something for existing users. That's pretty different from "the language literally doesn't matter because you can't verify every possible bug on arbitrary hardware", though.
The hardware only understands addresses and offsets, aka pointers :)
All the more reason to have memory safety on top.
Did you miss the part that writes about the "all new code is written in Rust" order coming from the top? It also failed miserably.
That was quite interesting, and now I will look at the stuff I shared previously from another point of view.
However, given how the Windows team has been against anything that isn't C++, it is not surprising that it actually played out like that.
It came from the top of Azure, and for Azure only. Specifically, the mandate was for all new code that cannot use a GC, i.e. no more new C or C++.
I think the CTO was very public about that at RustCon and other places where he spoke.
The examples he gave were contrived, though, mostly tiny bits of old GDI code rewritten in Rust as success stories to justify his mandate. Not convincing at all.
Azure node software can be written in Rust, C, or C++; it really does not matter.
What matters is who writes it as it should be seen as “OS-level” code requiring the same focus as actual OS code given the criticality, therefore should probably be made by the Core OS folks themselves.