Decisions that eroded trust in Azure – by a former Azure Core engineer

3 days ago (isolveproblems.substack.com)

I think this is especially problematic (from Part 4 at https://isolveproblems.substack.com/p/how-microsoft-vaporize...):

"The team had reached a point where it was too risky to make any code refactoring or engineering improvements. I submitted several bug fixes and refactoring, notably using smart pointers, but they were rejected for fear of breaking something."

Once you reach this stage, the only escape is to first cover everything with tests and then meticulously fix bugs, without shipping any new features. This can take a long time, and it cannot happen without full support from management, who neither fully understand the problem nor are incentivized to understand it.

  • This isn't incentivized in a corporate environment.

    Notice how "the talent left after the launch" is mentioned in the article? Same problem. You don't get rewarded for cleaning up mess (despite lip service from management) nor for maintaining the product after the launch. Only big launches matter.

    The other corporate problem is that it takes time before the cleanup produces measurable benefits and you may as well get reorged before this happens.

    • This is the root of the issue. For something like Azure, people are not fungible. You need to retain them for decades, and carefully grow the team, training new members over a long period until they can take on serious responsibilities.

      But employees are rewarded for showing quick wins and changing jobs rapidly, and employers are rewarded for getting rid of high earners (i.e. senior, long-term employees).

      19 replies →

    • > You don't get rewarded for cleaning up mess (despite lip service from management) nor for maintaining the product after the launch

      I have never worked at a shop or on a codebase where "move fast & break things, then fix it later" ever got to the "fix it later" part. I've worked at large orgs with large old codebases where the share of effort needed for BAU/KTLO (business-as-usual, keep-the-lights-on) work slowly climbs to 100%. Usually it's some combination of tech-debt accumulation, staffing reduction, and scale/scope increases pushing the existing system to its limits.

      This is related to a worry I have about AI. I hear a lot of expectations that we're just going to increase code velocity 5x, from people who have never maintained a product before.

      So moving faster & breaking more things (accumulating more tech debt) will probably have more rapid catastrophic outcomes for products in this new phase. Then we will have some sort of Butlerian Jihad or agile v2.

      2 replies →

    • Meanwhile, failure to clean up this particular mess was a key factor in losing a trillion dollars in market cap, according to the author.

    • It’s also a customer problem.

      In a product where a customer has to apply (or even just be aware of) updates, it's easier to excite them about new features than bug fixes.

      Especially for winning over new customers.

      If the changelog for a product's last 5 releases is only bug fixes (or worse, "refactoring" that isn't externally visible), most will assume either development is dead or the product is horribly bug-ridden - a bad look either way.

    • > This isn't incentivized in a corporate environment.

      Course it is - but only by the winners, who reward the employees who do the valuable work. Microsoft has all sorts of stupid reasons why they have lots of customers - all basically proxies for their customers' IT staff being used to administering Microsoft-based systems - but if they mess up the core reasons to use a cloud badly enough, they will fail.

    • It's a useful talent filter, though: when you're hiring, the set of people who quit doomed projects - and how fast they quit - is a great indicator of technology-evaluation skills.

    • You do, but then you make a career out of it: you become the fixer (and it can be a very good career, either technical or managerial).

  • No joke, I worked at a place where in our copy of system headers we had to #define near and far to nothing. That was because (despite not having supported any systems where this was applicable for more than a decade) there was a set of files that were considered too risky to make changes in that still had dos style near and far pointers that we had to compile for a more sane linear address space. https://www.geeksforgeeks.org/c/what-are-near-far-and-huge-p...

    Now, I'm just a simple country engineer, but a sane take on risk management probably doesn't prefer de facto editing files by hijacking keywords with preprocessor magic over, you know, just making the actual change, reviewing it, and checking it in.

  • Once you reach this stage, the only escape is to jump ship. Either mentally or, ideally, truly.

    You're in an unwinnable position. Don't take the brunt for management's mistakes. Don't try to fix what you have no agency over.

    • Unfortunately, what you will find is that, unless you get lucky, the next ship is more of the same.

      The system/management style is ingrained in the corporate culture of large-ish companies (I would say if there are more than 2 layers of management between you and someone who owns the equity of the business and calls the shots, it's "large").

      It stems from the fact that when shareholders bestow on an executive the responsibility of managing a company, the responsibility is diluted and the principal-agent problem rears its ugly head. When several more layers of this grow in a large company, incentives diverge, and the path of least resistance is to have zero trust in the "subordinates", lest they make a choice contrary to what their managers want.

      The only way to make good software is to have a small, nimble organization where the craftsman (doing the work) makes the call, gets the rewards, and suffers the consequences (if any). That aligns principal and agent.

      4 replies →

  • I was once in such a position. I persuaded management to first cover the entire project with an extensive test suite before touching anything. It took us around 3 months to get "good" coverage, and then we started refactoring the parts that were 100% covered. 5 months in, the shareholders got impatient and demanded "results". We were not ready yet, and in their minds we were doing nothing. No amount of explanation helped; they thought we were just adding superficial work ("the project worked before and we were shipping new features! Maybe you are just not skilled enough?"). Eventually they decided to scrap the whole thing. The project was killed and the entire team sacked.

    • I’m a developer, and if a team spent five months only refactoring with zero features added, I would fire them too.

      Refactoring and quality improvements must happen incrementally and in parallel with shipping new features and fixing bugs.

      2 replies →

  • > first cover everything with tests

    Beware this goal. I'm dealing with the consequences of TDD taken way too far right now. Someone apparently had this same idea.

    > management who do not fully understand the problem nor are incentivized to understand it

    They are definitely incentivized to understand the problem. However the developers often take it upon themselves to deceive management. This happens to be their incentive. The longer they can hoodwink leadership, the longer they can pad their resume and otherwise play around in corporate Narnia.

    It's amazing how far you can bullshit leaders under the pretense of how proper and cultured things like TDD are. There are compelling metrics and it has a very number-go-up feel to it. It's really easy to pervert all other aspects of the design such that they serve at the altar of TDD.

    Integration testing is the only testing that matters to the customer. No one cares if your user service works flawlessly with fake everything being plugged into it. I've never seen it not come off like someone playing sim city or factorio with the codebase in the end.

    • Customers don’t care about your testing at all. They care that the product works.

      Like most things, the reality is that you need a balance. Integration tests are great for validating complex system interdependencies. They are terrible for testing code paths exhaustively. You need both integration and unit testing to properly evaluate the product. You also need monitoring, because your testing environment will never 100% match what your customers see. (If it does, your system is probably trivial, and you don’t need those integration tests anyway.)

      1 reply →

    • Unit tests are just as important as integration tests as long as they're tightly scoped to business logic and aren't written just to improve coverage. Anything can be done badly, especially if it is quantified and used as a metric of success (Goodhart's law applies).

      Integration tests can be just as bad in this regard. They can be flaky, take hours, give you a false sense of security, and not even address the complexity of the business domain.

      I've seen people argue against unit tests because they force you to decompose your system into discrete pieces. I hope that's not the core concern here, because a well-decomposed system is easier to maintain and extend, as well as to write unit tests for.

      5 replies →

  • > Once you reach this stage, the only escape is to first cover everything with tests and then meticulously fix bugs

    The exact same approach is recommended in the book "Working Effectively with Legacy Code" by Michael Feathers, along with several techniques for doing it. He describes legacy code as 'code with no tests'.

    • "Show me the incentives, and I will show you the outcomes" - Charlie Munger

      I once worked in a shop where we had high and inflexible test coverage requirements. Developers eventually figured out that you could run a bunch of random scenarios and then `assert true` in the finally clause of the exception handler. Eventually you'd be guaranteed to cover enough to get by that gate.

      Pushing back on that practice led to a management fight about feature velocity and externally publicized deadlines.

  • It is so hard to test those codebases too. A lot of the time there's IO and implicit state changes through the code. Even getting testing in place, let alone good testing, is often an incredibly difficult task. And no one will refactor the code to make testing easier because they're too afraid to break the code.

  • > I submitted several bug fixes and refactoring, notably using smart pointers, but they were rejected for fear of breaking something.

    And that, my friends, is why you want a memory safe language with as many static guarantees as possible checked automatically by the compiler.

    • Language choices won't save you here. The problem is organizational paralysis. Someone sees that the platform is unstable. They demand something be done to improve stability. The next management layer above them demands they reduce the number of changes made to improve stability.

      2 replies →

    • Hence the rewrite-it-in-Rust initiative, presumably. Management were aware of this problem at some level but chose a questionable solution. I don't think rewriting everything in Rust is at all compatible with their feature timelines or severe shortages of systems programming talent.

      1 reply →

    • They could have started with simple Valgrind sessions before moving to Rust, though. A massive number of agents means microservices, and microservices are well suited to profiling/testing like that.

      5 replies →

  • Though this doesn't make much sense on its surface - a bug means something is already broken, and he tells of millions of crashes per month, so it was visibly broken. A 100% chance of being broken (the bug) > some chance of breakage from fixing it.

    (sure, the value of current and potential bug isn't accounted for here, but then neither is it in "afraid to break something, do nothing")

    • I've experienced a nearly identical scenario where a large fleet of identical servers (Citrix session hosts) were crashing at a "rate" high enough that I had to "scale up" my crash dump collection scripts with automated analysis, distribution into about a hundred buckets, and then per-bucket statistical analysis of the variables. I had to compress, archive, and then simply throw away crash dumps because I had too many.

      It was pure insanity, the crashes were variously caused by things like network drivers so old and vulnerable that "drive by" network scans by malware would BSOD the servers. Alternatively, successful virus infections would BSOD the servers because the viruses were written for desktop editions of Windows and couldn't handle the differences in the server edition, so they'd just crash the system. On and on. It was a shambling zombie horde, not a server farm.

      I was made to jump through flaming hoops backwards to prove beyond a shadow of a doubt that every single individual critical Microsoft security patch a) definitely fixed one of the crash bugs and b) didn't break any apps.

      I did so! I demonstrated a 3x improvement in overall performance -- which by itself is staggering -- and that BSODs dropped by a factor of hundreds. I had pages written up on each and every patch, specifically calling out how they precisely matched a bucket of BSODs exactly. I tested the apps. I showed that some of them that were broken before suddenly started working. I did extensive UAT, etc.

      "No." was the firm answer from management.

      "Too dangerous! Something could break! You don't know what these patches could do!" etc, etc. The arguments were pure insanity, totally illogical, counter to all available evidence, and motivated only by animal fear. These people had been burned before, and they're never touching the stove again, or even going into the kitchen.

      You cannot fix an organisation like this "from below" as an IC, or even a mid-level manager. CEOs would have a hard time turning a ship like this around. Heads would have to roll, all the way up to CIO, before anything could possibly be fixed.

      2 replies →

  • Once you reach this stage, honestly the only escape is real escape. Put your papers in and start looking for a job elsewhere, because when they go down, they will go down hard and drag you with them. It's not like you didn't try.

  • > Once you reach this stage, the only escape is to first cover everything with tests and then meticulously fix bugs, without shipping any new features.

    Isn't this where Oracle is with their DB? Wasn't HN complaining about that?

  • Or to simplify the product and rebuild.

    • “Rebuild” is also a four-letter word at this stage. The customer has a panel of knob-and-tube wiring and aluminum paper-wrapped wire in the house. They want a new hot tub. They don’t want some electrician telling them they need to completely rewire their house first at huge expense, such that they cannot afford the hot tub anymore. They’ll just throw the electrician out and get some kid in a pickup truck (“You’re Absolutely Right Handyman LLC”) to run a lamp cord to their new hot tub. Once the house burns to the ground, the new owners will wire their new construction correctly.

    • Exactly. But he’s right about management, first the problem must be acknowledged and that may make some people look bad.

  • Writing tests and then meticulously fixing bugs does not increase shareholder value.

    • Dave Cutler and his team are a clear counter-example. They famously shipped Windows NT with zero known bugs, which clearly brought enormous shareholder value.

      The problem, of course, is that this sort of thing doesn’t bring value next quarter.

  • Once you reach this stage, the only escape is to give up on it and move on.

    Some things are beyond your control and capabilities.

  • If the service is so shitty, why are people paying so much fucking money for it?

    Is Microsoft committing accounting fraud?

    • I worked at a startup that was using Azure. The reason was simple enough - it had been founded by finance people who were used to Excel, so Windows+Office was the non-negotiable first bit of IT they purchased. That created a sales channel Microsoft used to offer generous startup credits. The free money created a structural lack of discipline around spending. Once the startup credits ran out, the company was faced with a huge bill and difficulty motivating people to conserve funds.

      At the start I didn't have any strong opinion on what cloud provider to use. I did want to do IT the "old fashioned way" - rent a big ass bare metal or cloud VM, issue UNIX user accounts on it and let people do dev/test/ad hoc servers on that. Very easy to control spending that way, very easy to quickly see what's using the resources and impose limits, link programs to people, etc. I was overruled as obviously old fashioned and not getting with the cloud programme. They ended up bleeding a million dollars a month and the company wasn't even running a SaaS!

      I ended up with a very low opinion of Azure. Basic things like TCP connections between VMs would mysteriously hang. We got MS to investigate, they made a token effort and basically just admitted defeat. I raged that this was absurd as working TCP is table stakes for literally any datacenter since the 1980s, but - sad to say - at this time Azure's bad behavior was enabled by a widespread culture of CV farming in which "enterprise" devs were all obsessed with getting cloud tech onto their LinkedIn. Any time we hit bugs or stupidities in the way Azure worked I was told the problem was clearly with the software I'd written, which couldn't be "cloud native", as if it was it'd obviously work fine in Azure!

      With attitudes like that completely endemic outside of the tech sector, of course Microsoft learned not to prioritize quality.

      We did eventually diversify a bit. We needed to benchmark our server software reliably and that was impossible in Azure because it was so overloaded and full of noisy neighbours, so we rented bare metal servers in OVH to do that. It worked OK.

      4 replies →

    • Corporate inertia. Sibling comment uses the term "hostage situation" which I admit is pretty apt.

      Microsoft is an approved vendor in every large enterprise. That they have been approved for desktop productivity, Sharepoint, email and on-prem systems does not enter the picture. That would be too nuanced.

      Dealing with a Large Enterprise[tm] is an exercise in frustration. A particular client had to be deployed to Azure because their estimate was that getting a new cloud vendor approved for production deployments would be a gargantuan 18-to-24 month org-wide and politically fraught process.

      If you are a large corp and have to move workloads to the cloud (because let's be honest: maintaining your own data centres and hardware procurement pipelines is a serious drag) then you go with whatever vendor your organisation has approved. And if the only pre-approved vendor with a cloud offering is Microsoft, you use Azure.

    • Because Azure customers are companies that still, in 2026, use only Windows. Everyone else uses something else. It turns out companies like that don't tend to have the best engineering teams, so moving an entire cloud infrastructure from Azure to, say, AWS is probably either really expensive, really risky, or too disruptive for the type of engineering team that Azure customers have. I would expect MS to bleed from this slowly for a long time until they actually fix it. I seriously doubt they ever will, but stranger things have happened.

      2 replies →

    • I have worked at two retail companies where AWS was a no no. They didn't want to have anything depending on a competitor(Amazon). So they went the Azure route.

    • CFOs love it because Microsoft does bundle pricing with office. Plus they love to give large credits to bootstrap lock-in.

    • You’re assuming the alternatives don’t have just as many issues. There’s been exactly one “whistleblower”, who is probably tiptoeing the line of a lawsuit. Just because there isn’t a similarly disgruntled GCP or AWS engineer doesn't mean they don't have similar problems.

    • Yeah it’s entirely business people and executives who make these decisions in most companies. Not the ones who use it or implement on it.

    • Depending on the space you work in, you have almost no choice at all. If you're building for government then you're going to use Microsoft, almost "end of story".

    • Most of the upper management at companies who use them don't have the technical competence to see it (e.g. banks, supermarket chains, manufacturing companies).

      Once they are in, no one likes to admit they made a mistake.

    • Because the alternatives are also in similar state.

      AWS and GCP are also pretty crap. Use any of them and you'll hit just enough rough edges. The whole industry is just grinding out slop; quality is not important anywhere.

      I work with AWS on a daily basis, and I'm not really impressed. (Nor did GCP impress me in the short encounter I had with it.)

      2 replies →

I don't know if any of this is true, but as a user of Azure every day this would explain so much.

The Azure UI feels like a janky mess, barely being held together. The documentation is obviously entirely written by AI and is constantly out of date or wrong. They offer such a huge volume of services it's nearly impossible to figure out what service you actually want/need without consultants, and when you finally get the services up who knows if they actually work as advertised.

I'm honestly shocked anything manages to stay working at all.

  • I’ve created a bunch of fresh Azure accounts over the past few years and each time I’ve found myself sitting there dumbfounded anew at how garbage the experience is.

    There has been weird broken jank at just about every step of the process at one point or another. Like, I’m a serious person trying to set something up for a production workload, and multiple times along the way to just having a working account that I can log into with billing configured, I’ll get baffling error messages like [ServiceKeyDepartureException: Insufficient validation expectancy. Sfhtjitgfxswinbvgtt-33-664322888], and the whole thing will simply not work until several hours later. Who knows why!?

    I evaluated some Azure + Copilot Studio functionality for a project recently, which required more engagement with their whole 365 ecosystem than I’d had in a long time and it had many of the same problems but worse. Just unbelievably low quality software for the price and how popular it is. Every step of the way I hit some stupid issue. The people using this stuff are clearly not the people buying it.

    • I've joked that on some services, when you're clicking buttons, you're actually opening tickets that a human needs to action.

      That scenario is an example. You complete an action on a web page and nothing works. You make no further changes and hours later it works perfectly. Your human wasn't fast enough that day.

      13 replies →

  • I remember being impressed with the Azure docs... until I spent a week implementing something, only to have it completely fail when deployed to the test environment because the Graph API did not work as documented. The beautiful docs were a complete lie.

    These days I don't even bother looking at the docs when doing stuff with Azure.

  • We migrated some services to AKS because the upper management thought it was a good deal to get so many credits, and now pods are randomly crashing and database nodes have random spikes in disk latency. What ran reliably on GCP became quite unpredictable.

    • Exact same story at my place. Upper management decided it was a good idea to build on Azure because Microsoft promised some benefits. Things that ran reliably on GCP now need active firefighting on Azure.

    • Interesting! We're using AKS with huge success so far, but lately our Pods are unresponsive and we get 503 Gateway Timeouts that we really can't trace down. And don't get me started on Azure Blob Tables...

      8 replies →

    • GCP is hard to beat on k8s stuff. Performance and stability are crazy good.

      But it's not as famous as AWS, and it costs money. Hence moving away seems like a good idea :)

  • The part about prioritizing "aggressive feature velocity" over "core fundamentals" is true.

    The push is as insane as the push to AI.

    At the same time, fundamental improvements like migrating to .NET Core or reducing logging are actively deprioritised. If it were not for compliance, we would have no core engineering improvements at all.

    Honestly, I was not even aware of the Rust push, probably because no one in my org could do Rust. I am glad we did not move to AKS, though.

  • Oh my goodness, yes. And how often their role assumption does not work!

    I need privileges to do thing A, so I assume the role, and even though the role is shown as active, the buttons are still greyed out. Sometimes it works after 10 minutes and 7x F5, most often however I do a complete relogin with MFA in an incognito window. Not distracting at all, and even that does not work sometimes.

  • I have been a frustrated user as well. Their services seem to be held together by duct tape. For instance, an online endpoint creation failed after 90 minutes with "internal error" and no clue what the error was. Support tickets are routed overseas to consultants who don't have a clue - and their job is a daily email keeping the customer warm. All in all, as OP says, it's amazing that it is still hanging together. Some services work reliably, but not all.

A business man at a prior employer sympathetic with my younger, naive "Microsoft sucks" attitude told me something I remember to this day:

Microsoft is not a software company; they have never been experts at software. They are experts at contracts. They lead because their business machine excels at understanding how to tick the boxes necessary to win contract bids. The people who make purchasing decisions at companies aren't technical, and possibly don't even know a world outside Microsoft, Office, and Windows, after all.

This is how the sausage is made in the business world, and it changed how I perceived the tech industry. Good software (sadly) doesn't matter. Sales does.

This is why most of Norway currently runs on Azure, even though it is garbage, and even though every engineer I know who uses it says it is garbage. Because the people in the know don't get to make the decision.

  • I’d say they are very good at making platforms and locking everyone in. But they need a good platform first. Azure seems like the first platform that was kinda shitty from the beginning and did not improve much.

    MBASIC was good and filled a void so it got used widely from the beginning. The language is their first platform. Later the developer tools like the IDE, compilers, still pretty solid if you ask me.

    MS-DOS and Windows are their next platform. It started OK with DOS — because CP/M was not great either. But the stability of Windows sucked so they brought in David Cutler’s team to make NT. It definitely grabbed the home/office market but didn’t do well for the server market.

    Xbox is their third platform, which started very well, but we all know the story now.

    Azure is their fourth platform: it started shitty and is still not good. The other platforms have had their high points, but Azure may never have one.

    • Those are mostly end-user or hosting platforms you mention (and their problems); what really makes MS tick is the enterprise platforms.

      Windows networks, Active Directory, etc. Azure is the continuation of that: those who run AD often default to Azure (which offers, among other things, hosted or hybrid AD environments).

      1 reply →

  • That's true for Azure, where contracts are signed due to free credits given over Office and Windows usage.

    However, there is a reason why everyone uses Office and Windows. Office is the only suite that has the complete feature set (ask any accountant to move to Google Sheets). Windows is the only system that can effectively run on any hardware (PnP), and it has been that way for decades.

    This is due to superior software on the aspects that matter to customers.

    • People use Windows because Office runs on Windows, and Windows ran in any shitty cheap beige box. This is the whole story since the 1990's.

      On hardware: it's because Windows has a stable kernel ABI and makes it very simple for hardware vendors to write proprietary drivers. Linux kind of forces everybody to upstream their device drivers, which is good and bad at the same time - DKMS is something relatively new.

      But yeah, the NT kernel is very nice, the problem with Windows is the userland.

      > Windows is the only system that can effectively run on any hardware

      ...as long as that hardware is Intel-based (and a select few ARM-based boards nowadays). And the reason that it runs on all that hardware is Microsoft's business contracts with hardware vendors, not their software quality -- that's immaterial, as Microsoft generally does not write the drivers.

      2 replies →

    • I used to think that too.

      But if you really look at it, the "comfort zone" problem isn't, in itself, too big an issue for a few training workshops and brief acclimatization periods with other tool suites to solve. Making accountants move to Google Sheets is actually doable given enough incentive; there really isn't a lack of features in Sheets versus Excel so much as a difference of implementation. In fact, for many purposes Sheets and GSuite could even be the "superior software", if only one bothered to make good use of them.

      The problem is more that companies hesitate to take the dive because they can't be sure any of the alternatives will stay stable in the long run. Google is infamous for abruptly shutting down applications and none of the other competitors have built enough of a repute yet to ensure longterm reliability.

      Microsoft has been (and continues) riding on its first-mover advantage as an unmovable establishment for decades. It has worked out till now, but who knows till when.

  • This is in many ways a smart way to understand the problem, but it doesn't mean that Microsoft contracts leave you stuck with bad software. There are several verticals where Microsoft and Azure actually were smart and chose a better software product to sell on their platform than what they had in house.

    One example is when they stopped trying to develop an inferior product to EMR and Dataproc, and essentially just outsourced the whole effort to a deal between them and Databricks. Because of this, I assume many enterprise Azure customers have better-running data solutions in that space than they would have had they gone with just AWS or GCP.

    On the other hand, having worked for Microsoft on an Azure team, there are plenty of areas that critically need a rewrite (for dozens of different reasons), and such a solution is never pursued (or they just release some different product and tell those with different needs to migrate to it). Instead they keep building what can only really be described as hot-fixes to meet urgent customer demands, which make it harder to eventually do said critical rewrite.

    • The Databricks thing was a ploy. They then pushed Azure Synapse Analytics and forced all internal teams to stop using Azure Databricks. Synapse was half baked and then they are now pushing Microsoft Fabric which is even less baked.

    • About a year ago the whole situation changed and Microsoft started to push everyone to their own Data Engineering solution (Fabric) that back then was really half-baked.

  • An overly reductionist argument. They described any commercial software company, because in the end, you sell or you die. Microsoft has incredible software people and incredible software that coexist with the shitty software people and shitty software.

  • But that also means that if you as a user/customer can make choices based on technical merits, you'll have a significant advantage.

    • An advantage how? Maybe you'll have one or two more 9s of uptime than your competitors; does that actually move the needle on your business?

      5 replies →

    • Most customers don't really have the knowledge needed to make choices based on technical merits, and that's why the market works as it does. I'm willing to say 95% of people on HN have this knowledge and are therefore biased to assume others are the same way. It's classic XKCD 2501.

  • I think this is spot on. Everything at the R&D phase of a project indicates that an Azure service is going to work for the use case. I've read the docs and thought 'wow, this is perfect!'. Then you get to implementation and realize it's a buggy mess that barely does what you wanted in the first place, with a ton of caveats.

    Of course that realization comes when you are already at the point of no return, probably by design.

  • My lesson came when European companies followed US tech into offshoring: quality doesn't play any role as long as the software delivers, from a business point of view.

    Especially relevant when shipping software isn't the product the company sells.

  • The Finnish public sector is also a heavy Azure user. Their common ethos is that modern cloud services (= Azure) are in many respects more secure than on-premises data centers. In addition, they are cost-effective and reliable.

What are we reading here? These are extraordinary statements. Also with apparent credibility. They sound reasonable. Is this a whistleblower or an ex employee with a grudge? The appearance is the first. Is it? They’ve put their name to some clear and worrying statements.

> On January 7, 2025… I sent a more concise executive summary to the CEO. … When those communications produced no acknowledgment, I took the customary step of writing to the Board through the corporate secretary.

Why is that customary? I have not come across it, and though I have seen situations of some concern in the past, I previously had little experience with US corporate norms. What is normal here for such a level of concern?

Moreover, why is this playing out in public rather than as a court case for wrongful termination?

Is Azure really this unreliable? There are concrete numbers in this blog. For those who use Azure, does it match your external experience?

  • >Is Azure really this unreliable? There are concrete numbers in this blog. For those who use Azure, does it match your external experience?

    IME, yes.

    I'm currently working as an SRE supporting a large environment across AWS, Azure, and GCP. In terms of issues or incidents we deal with that are directly caused by cloud provider problems, I'd estimate that 80-90% come from Azure. And we're _really_ not doing anything that complicated in terms of cloud infrastructure; just VMs, load balancers, some blob storage, some k8s clusters.

    Stuff on Azure just breaks constantly, and when it does break it's very obvious that Azure:

    1. Does not know when they're having problems (it can take weeks/months for Azure to admit they had an outage that impacted us)

    2. Does not know why they had problems (RCAs we're given are basically just "something broke")

    3. Does not care that they had problems

    Everyone I work with who interacts with Azure at all absolutely loathes it.

    • But doesn’t this experience contradict what OP is saying, in a way? If Azure is always breaking, wouldn’t that imply that changes like “adding smart pointers” are being introduced into the codebase?

      3 replies →

  • As a former MSFTy it does sound weird to me too. I didn’t see what Axel’s level was, but a lot of people work for Microsoft and not many of them can expect to email the CEO and get a response. It seems a bit like a crash-out; not the first I’ve seen levied at Azure, and it won’t be the last. They probably think it’s a mental health episode: if you’re an important CEO, crazy people will email you all the time, and the staff probably filter them out before you see them. Also, this is a lot of internal gossip; I would be worried that airing this publicly would impinge on future career opportunities. Even healthy orgs would appreciate some discretion.

    I’m sure everything he said is completely true, Azure is one of the few tech stacks I refuse to work with and the predominant reason I left.

    If you’ve joined an org and nothing works the reason is usually that the org is dysfunctional and there is often very little you can do about it, and you’re probably not the first person who’s tried and failed at it.

    • While Microsoft is hierarchical, it did encourage reaching out in a "flat" manner internally.

      In my experience - a loooong time ago now - executive leadership would participate in high-level escalations/critsits for large/key customers on calls. I was just a lowly field engineer, but over the course of nearly 4 years I was on calls about 5 times with some of the big names from that era that everyone knows about... And they seemed to emit enough empathy with the specific customer situation to move things forward.

      However - being on the "other side of the fence" (i.e. external, consulting with Microsoft customers - some of whom even spend $1.5 billion/year on M365/Azure licensing) and assisting clients with issues and remediations for the last 10 years, things are no longer the same. No amount of escalation gets further than occasionally reaching some level of the product team - and it can take 8-12 months before that even occurs. Troubleshooting and deep-engineering support skills for cloud customers are typically non-existent, and the assigned resources seem to just wait until the issue resolves itself...

    • Never worked at a FAANG, but from what I've read about their cultures, I don't think a letter to the CEO from a senior engineer would go entirely unnoticed there. CEOs might receive crazy letters, but hopefully not regularly from their senior engineering staff.

      3 replies →

    • I like how caring about fiduciary responsibility is a mental health episode or personality disorder to enough people in the comments. Simply being employed gives you a vested interest in keeping an operation above board and healthy. If you have a stock plan, you have equal rights to comment on issues as some low IQ private equity chief that does an end run to manipulate a company for their own benefit. The cattle psychology of most IT workers and mid level managers never ceases to amaze me.

      2 replies →

  • In my experience Azure is full of consistency issues and race conditions. It's enough of an issue that I was talking about new OpenAI models becoming available via Bedrock on AWS and how convenient that was since I wouldn't have to deal with Azure and my colleague in enterprise architecture went on an unprompted rant about these exact issues. It's not the first time something like this has happened and I've experienced these issues first hand, so yes. I'd say reliability is a critical issue for Azure and it hasn't gotten better each time I've gone back to check.

  • I recall seeing some pretty damning reports from a security pentester who was able to escape from a container on Azure and found that the management controller for the service was years old, with known critical unpatched vulnerabilities. I've been a bit sceptical of them ever since.

  • Large orgs make decisions that prioritize short-term metrics over long-term quality all the time and nobody tracks whether those tradeoffs actually paid off. The decision to ship fast and fix later sounds reasonable in a meeting setting until articles like this surface and the reality comes through clearly.

    • > sounds reasonable in a meeting setting until articles like this surface

      No. It sounds reasonable past that. Because shipping features will make shareholders happy while an article like this will change nothing.

      1 reply →

  • I am sort of confused how NDAs and similar agreements employees sign would allow an employee to post such an article without being sued by Microsoft.

    • Wild guess: touching this with a 10-foot pole risks validating his claims. If they sue for breach of NDA, it suggests his claims are factually correct, and if they sue for libel and it goes to court, they may be forced to submit documents they don't want to.

  • What I meant is that it’s customary to write to the Board through the Secretary as opposed to write directly or through some other channel.

    • Thanks for the direct reply! I wasn’t aware it was ever customary to write to a board.

      But I do see you have very clear concerns.

      One thing I don’t fully follow is: how did it get from such a nicely designed system, built by Dave Cutler, to this — simply moving fast and building tech debt?

      2 replies →

  • > What are we reading here? These are extraordinary statements. Also with apparent credibility.

    I left Microsoft in 2014. Already back then I could see this sort of stuff starting to happen.

    The Office Org was mostly immune from it because they had a lot of lifers, people who had been working on the same code for decades and who thought through changes slowly.

    But even by 2014 there were problems hiring developers who knew C++, or who wanted to learn it. COM? No way. On one team we literally had to draw straws once to determine who was going to learn how to write native code for Windows.

    It wasn't even a talent thing, Windows development skills are a career dead end outside of Microsoft. They used to be a hot commodity, and Microsoft was able to hire the best of the best from industry. Now they have to train people up, and Microsoft doesn't offer any of the employment perks that they used to use to attract top talent (Seattle used to be a low CoL area, everyone had private offices, job stability).

    When I started at Microsoft in 2007, the interview bar included deep knowledge of how computers worked. It wasn't unusual to have meetings drop down to talking about assembly code. Your first day after orientation was a bunch of computer parts and you were told to "figure out how to setup your box".

    Antivirus wasn't mandatory. The logic was if you got a virus, they made a mistake hiring you and you deserved to be fired.

    When your average developer can go that deep on any topic, you can generally leave engineers well enough alone and get good software.

    • > But even by 2014 there were problems hiring developers who knew C++, or who wanted to learn it. COM? No way.

      It doesn't help that there are some teams that are hardcore about keeping things as they are and don't want any tooling that might improve the COM development experience.

      To this day, Microsoft has yet to ship any COM-related tooling for C++ that is as easy to use as what C++ Builder offers.

      MFC, ATL, WRL, WIL,.... you name it.

      The only time it seemed they finally got it, with C++/CX, there was a group that managed to kill this product and replace it with C++/WinRT, with no tooling other than the command-line IDL compiler, now also abandoned as they refocused on windows-rs.

      2 replies →

    • “One team we literally had to draw straws once to determine who was going to learn how to write native code for Windows.”

      Jesus, you have tons of people who are willing to do that, even now. Microsoft just doesn't care to hire from non-target schools, or to take ordinary professionals and train them. Presumably the reason is that people believe you cannot improve mediocrity, which I don't believe.

      On a completely different page, most of the generals, advisors, and high-level bureaucrats of the first Emperor of the Han dynasty came from exactly one county — the county of Pei. But in peaceful times they were just “ordinary people”.

      3 replies →

  • Yeah, I thought that was extreme. An engineer going to the board of any corporation, let alone Microsoft, is not normal or customary IME. That could explain why they got no response.

    • When you see significant risks to the org and its value, and they go completely unaddressed by management, the board is the final step before going to the public. It is the board’s duty to the public owners to make sure management isn’t driving the company into the ground.

      It would be interesting to see this raised in the next shareholders meeting as a question of whether the board and exec team are actually competent and doing their work.

      A man can dream anyway. When there is this much money on the line, sometimes people actually get held somewhat accountable.

    • It's a baffling flaw in human nature. The board should have cared about these issues, but in practice communications to and from the board are tightly controlled, and communications outside of those constraints are discarded.

      This occurs whether or not it makes sense. Machiavelli actually warns about this specifically: if someone else controls access to you and communication with you, they have real leverage over you.

    • “customary” referred to the path through the Secretary, as opposed to writing directly to members. Besides that, depending on the nature of the communication, if everything fails, you may need to be sure you talk to people who will unconditionally put the best interests of the company ahead of any other consideration. The Board is one such group. See what Boeing did with the report of the mechanic who saw flaws in the 737 MAX’s door plugs. Was that worthy of a letter to the CEO, then the Board if no reaction? Or just talk to your dismissive manager and let the planes crash? I made a judgment call, which I entirely own.

  • The CEO is accountable to the board. If they are derelict in their obligations to the company, that's where you need to raise a stink so they can fix it.

    • Well, yeah, that’s what a board does, but I think the issue is whether it is customary to go to the board directly in this situation. The answer is a resounding NO. Very odd, but cool idea and approach.

      5 replies →

    • Yeah, but I can't conceive of a world where a Board would care about technical complaints from an employee about engineering decisions several levels downstream of the CEO's executive domain.

  • Yes it is that unreliable. Even when given free credits, I would rather pay for the offerings from Amazon/Google.

  • Azure is when you have a different version of the same product/API in each region.

  • I notice the title mentions the author is a former employee but he never mentions the terms on which he left.

    • at the bottom of part 4 -

      > The org’s leadership responded with strong defensiveness and denial. Not long afterward, the organization terminated my employment.

The post is so dramatized, and so clearly written by someone with a grudge, that it really detracts from whatever point is being made, if there is one.

From another former Az eng now elsewhere still working on big systems, the post gets way way more boring when you realize that things like "Principle Group Manager" is just an M2 and Principal in general is L6 (maybe even L5) Google equivalent. Similarly Sev2 is hardly notable for anyone actually working on the foundational infra. There are certainly problems in Azure, but it's huge and rough edges are to be expected. It mostly marches on. IMO maturity is realizing this and working within the system to improve it rather than trying to lay out all the dirty laundry to an Internet audience that will undoubtedly lap it up and happily cry Microslop.

Last thing, the final part 6 comes off as really childish, risks to national security and sending letters to the board, really? Azure is still chugging along apparently despite everything being mentioned. People come in all the time crying that everything is broken and needs to be scrapped and rewritten but it's hardly ever true.

  • >risks to national security and sending letters to the board, really?

    Yes, really, and guess what the DoD did on Aug 29, 2025, exactly 234 days after I warned the CEO of potential risks?

    https://www.propublica.org/article/microsoft-china-defense-d...

    It wasn’t specifically about the escort sessions from any particular country, though, but about the list of underlying reasons why direct node access was necessary.

  • > People come in all the time crying that everything is broken and needs to be scrapped and rewritten but it's hardly ever true.

    Or… you’ve just normalised the deviance.

    One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.

    After about three or four weeks everyone adapts, learns what they can and can’t criticise without fallout, and settles into the mud to wallow with everyone else that has become accustomed to the filth.

    As an Azure user I can tell you that it’s blindingly obvious even from the outside that the engineering quality is rock bottom. Throwing features over the fence as fast as possible to catch up to AWS was clearly the only priority for over a decade and has resulted in a giant ball of mud that now they can’t change because published APIs and offered products must continue to have support for years. Those rushed decisions have painted Azure into a corner.

    You may puff your chest out, and even take legitimate pride in building the second largest public cloud in the world, but please don’t fool yourself that the quality of this edifice is anything other than rickety and falling apart at the seams.

    Remind me: can I use IPv6 safely yet? Does it still break Postgres in other networks? Can azcopy actually move files yet, like every other bulk copy tool ever made by man? Can I upgrade a VM in-place to a new SKU without deleting and recreating it to work around your internal Hyper-V cluster API limitations? Premium SSDv2 disks for boot disks… when? Etc…

    You may list excuses for these quality gaps, but these kinds of things just weren’t an issue anywhere else I’ve worked as far back as twenty years ago! Heck, I built a natively “all IPv6” VMware ESXi cluster over a decade ago!

    • > One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.

      Wellllll ... my observations after many cycles of this are:

      - wtfs/day exclaimed by people interacting with *a new codebase* are not indicative of anything. People first encountering the internals of any reasonably interesting system will always be baffled. In this context "wtf" might just mean "learning something new".

      - wtfs/day exclaimed by people learning about your *processes and workflows* are extremely important and should be taken extremely seriously. "wtf, did you know all your junior devs are sharing a single admin API token over email?" for example.

    • > One of the few reliable barometers of an organisation (or their products) is the wtf/day exclaimed by new hires.

      Eh, I don't think this is exactly as reliable as you'd expect.

      My previous job had a fairly straightforward codebase but fairly poor reliability for the few customers we had, and the WTF portions usually weren't the ones that caused downtime.

      On the other hand, I'm currently working on a legacy system with daily WTFs from pretty much everyone, with a greater degree of complexity in a number of places, and yet we get fewer bug reports and at least an order of magnitude if not two more daily users.

      With all of that said... I don't think I've used any of Microsoft's new software in years and thought to myself "this feels like it was well made."

      3 replies →

    • I mean, the org had already decreed everything needed to be rewritten in Rust according to the account.

  • > Last thing, the final part 6 comes off as really childish, risks to national security and sending letters to the board, really?

    That struck me too. Maybe I've never worked high enough in an org (I'm unclear how highly ranked the author of the piece is), but I've never been in an org where going over your boss's boss's boss's boss's head and writing a letter to the board was likely to go well.

    That said, I could easily believe both that Azure is an absolute mess and that the author of the piece was fired because of how he went about things.

  • AWS and Google Cloud are both huge and are significantly better in UX/DX. My only experience with Azure was that it barely worked and provided very little information about why it didn't. I have only negative impressions of Azure, whereas with GC and AWS I can at least say my experiences are mixed.

  • > From another former Az eng now elsewhere still working on big systems, the post gets way way more boring when you realize that things like "Principle Group Manager" is just an M2 and Principal in general is L6 (maybe even L5) Google equivalent. Similarly Sev2 is hardly notable for anyone actually working on the foundational infra.

    Before the days of title inflation across the industry, a Principal at Microsoft was a rare thing. When I was there, the ratio was maybe 1 principal for every 30 developers. Principals were looked up to, had decades of experience, and knew their shit really well. They were the big guns you called in to fix things when the shit really hit the fan, or when no one else could figure out what was going on.

    • One of Microsoft's problems is their pay is significantly lower than FAANG and so you very very rarely see people with expertise in the same verticals jump to Azure. I get that "the deal" at Microsoft is lower pressure for lower pay but it really hinders the talent pipeline. There are some good home grown principals and seniors, but even then I think the people I worked with would have done well to jump around and get a stint at another cloud provider to see what it's like. Many of them started as new grads and their whole career was just at Azure.

      Meanwhile when I was at another company we would get a weekly new hire post with very high pedigree from other FAANGs. And with that we got a lot of industry leading ideas by osmosis that you don't see Azure getting.

      2 replies →

  • > risks to national security

    Microsoft is the go-to solution for every government agency, FEDRAMP / CMMC environments, etc.

    > People come in all the time crying that everything is broken and needs to be scrapped and rewritten but it's hardly ever true.

    This I'm more sympathetic to. I really don't think his approach of "here's what a rewrite would look like" was ever going to work and it makes me think that there's another side to this story. Thinking that the solution is a full reset is not necessarily wrong but it's a bit of a red flag.

    • At no point during my reading did I get the sense that he's suggesting something radical. Where specifically does he propose a rewrite?

      "The practical strategy I suggested was incremental improvement... This strategy goes a long way toward modernizing a running system with minimal disruption and offers gradual, consistent improvements. It uses small, reliable components that can be easily tested separately and solidified before integration into the main platform at scale." [1]

      [1] https://isolveproblems.substack.com/p/how-microsoft-vaporize...

      2 replies →

    • > Microsoft is the go to solution for every government agency, FEDRAMP / CMMC environments, etc.

      I've been involved with FEDRAMP initiatives in the past. That doesn't mean as much as you'd think. Some really atrocious systems have been FEDRAMP certified. Maybe when you go all the way to FEDRAMP High there could be some better guardrails; I doubt it.

      Microsoft has just been entrenched in the government, that's all. They have the necessary contacts and consultants to make it happen.

      > Thinking that the solution is a full reset is not necessarily wrong but it's a bit of a red flag.

      The author does mention rewriting subsystem by subsystem while keeping the functionality intact, adding a proper messaging layer, until the remaining systems are just a shell of what they once were. That sounds reasonable.

      3 replies →

  • I think he did kind of point at the lack of seniority in the org, so I'm not sure he was trying to exaggerate with the titles.

    I'm really struck that they have such junior people in charge of key systems like that.

    • Juniors love to hack out new things, and in the meantime they can take the blame if needed. Fair trade, wouldn't you say?

  • I've worked at both Microsoft and Google in the past 6 years and the notion that msft "Principal" is equivalent to goog L5 is crazy.

    • Meaning MSFT Principal is below L5? I got the same feedback from one of my friends who works at Google. She said the quality of former MSFT engineers now working at Google was noticeably lower.

      3 replies →

  • > risks to national security …really?

    Really. Apparently the Secretary of War agrees with him.

    • In fairness the SECWAR is hardly a computing expert.

      But in this case the SECWAR has been properly advised. If anything, it's astonishing that a program whereby China-based Microsoft engineers told U.S.-based Microsoft engineers specific commands to type ever made it off the proposal page inside Microsoft, accelerated time-to-market or not.

      It defeats the entire purpose of many of the NIST security controls that demand things like U.S.-cleared personnel for government networks, and Microsoft knew those were a thing because they were the whole point of the "digital escort" (a U.S. person who was supposed to vet the Chinese engineer's technical work despite apparently not being technical enough to have just done it themselves).

      Some ideas "sell themselves", ideas like these do the opposite.

      12 replies →

  • The problem is that what he writes is very plausible and explains a lot about why Azure is so unreliable and insecure. The author didn't mention the shameful way Microsoft leaked a Golden SAML key to Chinese hackers. This event absolutely was a threat to national security.

  • If your reaction is emblematic of the way people reacted to his points internally, that gives more credibility to his side of the story, IMHO.

  • Yes, it's easy to critique any large system or organisation, but to then go over everyone's head and cry to the CEO and Board is snake-like behaviour, especially when offering yourself as the answer to fix it. OP will be marked as a troublemaker and a bad team member.

    • Maybe. That would be a dent in the shiny culture of trust Microsoft is proud to run on, though.

  • Do you contest the fact that Microsoft royally fumbled OpenAI out of sheer incapability of providing what's supposed to be its core business despite having all deals in its favor? Because that's the most damning validation against Azure in recent times.

  • The grudge is simple and doesn't detract one bit from a very well-articulated blog: you do your job as an engineer by pointing out problems, even proposing solutions, and they fire you for doing exactly that job. It's infuriating enough just from reading it; I don't know how you can't see any legitimacy in what the guy is complaining about. You have your right of free speech to complain about shitty jobs if you want; there's no honor-bound duty to maintain silence here.

  • He might sound like he has a grudge but you sound like you’re personally invested. Shill?

I've seen Azure OpenAI leak other customer's prompt responses to us under heavy load.

https://x.com/DaveManouchehri/status/2037001748489949388

Nobody seems to care.

  • This is insane. When you say Azure OpenAI, do you mean GitHub Copilot, Microsoft Copilot, hitting OpenAI’s API, or some OpenAI LLM hosted on an Azure offering that you hit through Azure? This is some real Wild West crap!

  • If this is real, the scary part isn't that it happened. The scary part is Microsoft not acknowledging/publishing/warning that it happened. "We gave your data to other people" is one of those things you should really tell people.

  • That is absolutely insane.

    • Yeah, I saw over 100 leaked messages.

      Fun ones include people trying to get GPT to write malware.

        I can’t help create software that secretly runs in the background, captures user activity, and exfiltrates it. That would meaningfully facilitate malware/spyware behavior.
      
        If your goal is legitimate monitoring, security testing, or administration on systems you own and where users have given informed consent, I can help with safe alternatives, for example:
      
        - Build a visible Windows tray app that:
          - clearly indicates it is running
          - requires explicit opt-in
          - stores logs locally
          - uploads only to an approved internal server over TLS
        - Create an endpoint telemetry agent for:
          - process inventory
          - service health
          - crash reporting
          - device posture/compliance
        - Implement parental-control or employee-monitoring software with:
          - consent banners
          - audit logs
          - uninstall instructions
          - privacy controls and data retention settings
      
        I can also help with defensive or benign pieces individually, such as:
      
        - C# Windows Service or tray application structure
        - Secure HTTPS communication with certificate validation
        - Code signing and MSI installer creation
        - Local encrypted logging
        - Consent UI and settings screens
        - Safe process auditing using official Windows APIs
        - How to send authorized telemetry to your own server
      
        If you want, I can provide a safe template for a visible C# tray app that periodically sends approved system-health telemetry to your server

  • This should be a high-severity incident if data isolation has failed anywhere. And that's for SaaS, let alone a cloud provider.

  • Did you anonymize those? Did Azure dox them, or send the templated version?

    • Azure sent them to me like that.

      I only saw two companies mentioned in the messages I got back. I reached out to both to try to confirm, but never heard back.

It's a nice read. Thank you for sharing this.

> Microsoft, meanwhile, conducted major layoffs—approximately 15,000 roles across waves in May and July 2025 —most likely to compensate for the immediate losses to CoreWeave ahead of the next earnings calls.

This is what people should know when seeing massive layoffs due to AI.

  • I honestly thought this was one of the weaker points of the article.

    The OpenAI deal almost certainly related purely to GPU capacity, which had little to do with the article. The layoffs would have happened regardless.

    IMO, churn and generalization are the root cause. Engineers are thrown on projects for a year with little prior experience, leave others to pick up the pieces, etc. There's no longer a sense of ownership, and I'm sure the recent wave of layoffs isn't helping with this.

"For fiscal 2025, Microsoft CEO Satya Nadella earned total pay of $96.5 million, up 22% from a year earlier." -CNBC.com

and

"I also see I have 2 instances of Outlook, and neither of those are working." -Artemis II astronaut

A previous colleague of mine has to work with Azure day to day, and everything explained in this article makes a lot of sense when I hear their massive rants about the platform.

12 years ago I had to choose whether to specialize in AWS, GCP, or Azure, and from my very brief foray with Azure I could see it was an absolute mess of broken, slow, click-ops methodology. This article confirms my suspicions from that time, and my colleague's experience.

> The direct corollary is that any successful compromise of the host can give an attacker access to the complete memory of every VM running on that node. Keeping the host secure is therefore critical.

> In that context, hosting a web service that is directly reachable from any guest VM and running it on the secure host side created a significantly larger attack surface than I expected.

That is quite scary

  • It is kind of a fundamental risk of IMDS: guest VMs often need some metadata about themselves, and the host has it. A hardened, network-gapped service running host-side is acceptable, possibly the best solution. I think the issue is when your IMDS is fat and vulnerable, which this article kind of alludes to.

    There's also the fact that Azure's implementation doesn't require auth, so it's very vulnerable to SSRF.

    • You could imagine hosting the metadata service somewhere else. After all, there is nothing a node knows about a VM that the fabric doesn't. And things like certificates come from somewhere anyway; they are not on the node, so that service is just a cache.

      5 replies →
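
[Editor's aside: a comment above notes that Azure's IMDS lives at a link-local address and is a classic SSRF target. One common server-side defense is simply refusing to fetch user-supplied URLs that point at the link-local metadata range. A minimal illustrative sketch in Python — the function name and the DNS caveat are mine, not from the thread:]

```python
import ipaddress
from urllib.parse import urlsplit

# Cloud metadata services (AWS, Azure, GCP) sit in the IPv4 link-local
# range; 169.254.169.254 is the well-known metadata address.
LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

def is_metadata_target(url: str) -> bool:
    """Return True if the URL points at a link-local (metadata) IP.

    Note: this only catches literal IPs. A real SSRF defense must also
    resolve hostnames and re-check the resolved address, since
    attacker-controlled DNS can point a benign-looking name here.
    """
    host = urlsplit(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not a literal IP; would need DNS resolution too
    return addr in LINK_LOCAL
```

A fetch proxy would call this check (plus DNS-resolution re-checks) before issuing any outbound request on a user-supplied URL.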

  • This is well documented: https://learn.microsoft.com/en-us/azure/virtual-machines/ins...

    Why would an Azure customer need to query this service at all? I was not aware this service even existed, because I never needed anything like it. As far as I can tell, this service tells services running on the VM what SKU the VM is. But how is this useful to the service? Could any Azure users tell me how they use IMDS? Thanks!

    • > Why would an Azure customer need to query this service at all? I was not aware this service even exists- because I never needed anything like it.

      The "metadata service" is hardly unique to Azure (both GCP & AWS have an equivalent), and it is what you would query to get API credentials to Azure (/GCP/AWS) service APIs. You can assign a service account² to the VM¹, and the code running there can just auto-obtain short-lived credentials, without you ever having to manage any sort of key material (i.e., there is no bearer token / secret access key / RSA key / etc. that you manage).

      I.e., easy, automatic access to whatever other Azure services the workload running on that VM requires.

      ¹and in the case of GCP, even to a Pod in GKE, and the metadata service is aware of that; for all I know AKS/EKS support this too

      ²I am using this term generically; each cloud provider calls service accounts something different.

    • Mainly for getting managed-identity access tokens for Azure APIs. In AWS you can call it to get temporary credentials for the EC2 instance's attached IAM role. In both cases, you use IMDS to get tokens/creds for identity/access management.

      Client libraries usually abstract away the need to call IMDS directly by calling it for you.

      3 replies →
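
[Editor's aside: the managed-identity flow described above is a plain HTTP GET to a link-local address with a `Metadata: true` header, per Azure's IMDS documentation. A sketch in Python — building the request is pure; actually issuing it only works from inside an Azure VM with a managed identity assigned:]

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

IMDS = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource: str) -> Request:
    """Build the documented IMDS managed-identity token request.

    `resource` is the token audience, e.g. "https://management.azure.com/".
    The Metadata header is mandatory; it also blocks naive SSRF relays
    that cannot set custom request headers.
    """
    query = urlencode({"api-version": "2018-02-01", "resource": resource})
    return Request(f"{IMDS}?{query}", headers={"Metadata": "true"})

def fetch_token(resource: str) -> str:
    # Only works on an Azure VM with a managed identity assigned.
    with urlopen(imds_token_request(resource), timeout=2) as resp:
        return json.load(resp)["access_token"]
```

Client libraries (e.g. the Azure Identity SDKs) do essentially this for you behind the scenes.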

    • I use GCP, but it also has the idea of a metadata server. When you use a Google Cloud library in your server code like PubSub or Firestore or GCS or BigQuery, it is automatically authenticated as the service account you assigned to that VM (or K8S deployment).

      This is because the metadata server provides an access token for the service account you assigned. Internally, those client libraries automatically retrieve the access token and therefore auth to those services.

    • There are a bunch of things a VM needs when first starting from a standard image. Think certificates and a few other things.
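
[Editor's aside: the GCP flow described a couple of comments up — a client library transparently pulling a service-account token from the metadata server — can be sketched roughly like this. The endpoint and `Metadata-Flavor: Google` header are documented; the caching class is my illustration of what official clients do, not their actual code:]

```python
import json
import time
from urllib.request import Request, urlopen

# Documented GCE metadata-server endpoint for the VM's service account.
TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
             "instance/service-accounts/default/token")

class MetadataTokenSource:
    """Fetch and cache the VM service-account token (illustrative)."""

    def __init__(self, fetch=None):
        # `fetch` is injectable for testing; the default hits the real server.
        self._fetch = fetch or self._fetch_live
        self._token, self._expiry = None, 0.0

    def _fetch_live(self):
        req = Request(TOKEN_URL, headers={"Metadata-Flavor": "Google"})
        with urlopen(req, timeout=2) as resp:  # only works on a GCE VM
            return json.load(resp)

    def token(self):
        now = time.time()
        if self._token is None or now >= self._expiry:
            body = self._fetch()  # {"access_token": ..., "expires_in": ...}
            self._token = body["access_token"]
            # Refresh a bit early so callers never see a stale token.
            self._expiry = now + body["expires_in"] - 60
        return self._token
```

This is why code on the VM "just works" against PubSub/GCS/etc. without any key material on disk.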

The personal account makes a lot of sense, although I can easily see why the OP was not successful. Even if you are an excellent engineer, getting people to do things, accept ideas, and in general hear you out requires a completely different skill altogether: being a good communicator.

The second thing is that this series of blog posts (whether true or not, but still believable) provides a good introduction for vibe coders: people who have not written a single line of code themselves and have not worked on any system at scale, yet believe that coding is somehow magically "solved" thanks to LLMs.

Writing the actual code itself (fully or partially), maybe yes. But understanding the complexity of the system and working within the organisational structures that support it is a completely different ball game.

  • I disagree.

    I've worked on honing my communication skills for 20 years in this industry. Every time I have failed to get the desired result, I have gone back to the drawing board to understand how I can change how I'm communicating to better convey meaning, urgency, and all that.

    After all that I've finally had an epiphany. They simply don't care. They don't care about quality, about efficiency, about security. They don't care about their users, their employees, they don't care about the long term health of the company. None of it. Engineers who do care will burn out trying to "do their job" in the face of management that doesn't care.

    It's getting worse in the tech industry. We've reached the stage where leaders are in it only for themselves. The company is just the vehicle. Calls for quality fall on deaf ears these days.

    • yes, so situational awareness is even more fundamental than communication

      especially because people hired by people hired by people (....) hired by founders (or delegated by some board that's voted in by successful business people) did not get there by being engineering-minded.

      and this is inconceivable for most engineering-minded people!

      they don't care because their world, their life, their problems, and their solutions are completely devoid of that mindset.

      some very convincing founder types try to imitate it, and some dropouts who spent a few years around people with this mindset can also imitate it for a while, but for them it's just a thing, like the government, history, or geography: it's just there. if there's a hill they just go around it; they don't want to understand why it's there, what's there, what's under it, what geological processes formed it, why, how, or how long it will be there ...

  • > Even if you are an excellent engineer, making people do things, accept ideas, and in general hear you requires a completely different skill altogether - basically being a good communicator.

    I was thinking like this for a while, but now I think this expectation is largely false for a senior individual contributor, especially for someone who can push out a detailed series of blog posts and has tried step-wise escalation.

    Communication is a two-way street. Unlike the individual contributors, management is responsible for listening and responding to risk assessments from senior members, and for ensuring that technical competence and experienced people are retained in a tech company. If a leader doesn't want to keep an open ear, they do not belong there. If there is huge attrition of highly senior people from non-finalized projects, you do not belong in leadership either. Both cases are mentioned in the article.

    Unfortunately, our socioeconomic and political culture in the West has increasingly removed responsibilities and liabilities from company leadership. This causes people with lackluster technical, communication, and risk-assessment skills to be promoted into leadership positions.

    So outside of a couple of completely privately owned companies or exceptionally well organized NGOs, it will be increasingly difficult to find good leaders.

  • Even before vibe coding, this problem existed.

    The truth is, only small companies build good stuff. Once a company becomes big enough, the main product it originally started with is the only thing worth buying from them; all new ventures are bound to be shit, because you are never going to convince people to break out of the status-quo work patterns that work for the rest of the company.

    The only exception to this has been Google, which seems to isolate the individual sectors a lot more and let them have more autonomy, with less focus on revenue.

  • OP was not successful because the decision makers didn't want to fix the problems he discussed. I have been in the exact same situation, and no level of communication skill would have changed their minds.

    • Or they did, but they needed/wanted to do something else more.

      That's usually based on either (a) more perspective, or (b) lack of foundational depth.

      1 reply →

  • Absolutely textbook "Brilliant Jerk". Dude just whines and whines and whines. If you're so good, why can't you get anybody to work with you?

    • I did not get that impression at all. He mentioned quite a few conversations with partner level employees, technical fellow, principal managers.

      The impression I got is that he tried to fix things, but the mess is so widespread, and the decision makers so comfortable in it, that nobody wants to stick their necks out and fix things. I got strong NASA Challenger vibes reading this story…

      1 reply →

This reads pretty bad, and I believe it was. I worked on (and was at least partly responsible for) systems that do the same thing he described. It took constant force of will, fighting, escalation, etc., to hold the line and maintain some basic level of stability and engineering practice.

And I've worked other places that had problems similar to the core problems described, not quite as severe, and not at the same scale, but bad enough to doom them (IMO) to a death loop they won't recover from.

I had the misfortune of having to use Azure back in 2018 and was appalled at the lack of quality and the slowness. I was in GitHub forums helping other customers suffering from a lack of basic functionality and incredible prices with abysmal performance. This article honestly explains a lot.

Google’s Cloud feels like the best engineered one, though lack of proper human support is worrying there compared to AWS.

  • I thought that about GCP until I used it more seriously and kept running into issues where they didn't have some feature AWS had had for ages, and our Google engineers kept saying the answer was to run your own service in Kubernetes rather than use a platform service, which did not give me confidence that they understood what the business proposition was.

  • GCP's support is abysmal. Our assigned customer support agent has changed 3 times in as many months. It's really a dice roll whether our quota increase requests are even acknowledged or we can get clarification on undocumented system limitations.

  • Unless you work in Alphabet's marketing department, no, GCP isn't the best one. The most reliable cloud has always been AWS, by a wide margin. The exec in charge of GCP has had to apologize in public on multiple occasions for GCP's reliability problems. Sounds like they have fixed them by now (years later), but that doesn't make up for the disaster that was BigQuery.

    Also, GCP is more focused on smaller customers so perhaps that's the part that works for you. AWS can be a bit daunting. But AWS actually versions their APIs and publishes roadmaps and timelines for when APIs get added and retired and what you should use instead. GCP will just cancel things on short notice with no replacement.

  • > Google’s Cloud feels like the best engineered one, though lack of proper human support is worrying there compared to AWS.

    Also the lack of locations in general. GCP's fleet is tiny compared to both AWS and Azure

Axel's engagement with the issue and refusal to give up is admirable. It also demonstrates that code and architecture remain important even in an era when managers believe these subjects can now be handled by LLMs. Imagine if LLMs were mandated for use in such an environment, further distancing SWEs from the code and overarching architectural choices. I am not saying that it can't work. But friction and maturity through experience really matters.

Also explains perfectly why I never met an engineer who was eager to run workloads on Azure. In the orgs I worked in, the use of Azure was either mandated by management (probably good $$ incentives) or driven by Microsoft leaning into the "multi-cloud for resilience" selling point to get orgs to shift workloads from competitors.

It's also a huge case for open (cloud) stack(s).

A tale as old as corporations. Corporate Ladders optimize for Ladder-Climbers, rather than Management Skills or Technical Skills.

Organization Design is tough. And gets even more challenging with size. Unfortunately, Org Design over time falls to those folks that rose up the ladder, rather than folks dedicated to understanding and designing orgs.

Switching from a Traditional org to an Agile one doesn't eliminate the need for thoughtful org design, it just changes the structures and incentives, and understanding and leveraging the interplay of various factors still requires unbiased organizational skills.

Mature companies will often send executives through training on organization design, but separating the incentives that apply personally to the executives from what they do for the company can be challenging. So larger companies will tend to have an org design or operating-model team, and very large companies will formalize this as a CoE or Transformation Office.

Still, getting that balance right can be tricky. Looks like MS failed badly in this instance. Maybe they learned from it, maybe they didn't. Judging by the way things are going with Win11, and the lack of response from the EVP, CEO and Board levels, maybe they ignored their internal folks that help with alignment, or more likely, simply laid them off!

Back in 2011 at Fujitsu, I ran one of the earliest Azure production subscriptions outside Microsoft. Windows Azure, mid-2011. I've watched this platform for 15 years from the outside.

Part 1 barely scratches the surface. Read parts 2 through 6.

The 173 agents story, the 200 manual node interventions per day, the WireServer sitting on the secure host side with unencrypted tenant memory mixed in shared address space, the letters to the EVP, the CEO, the Board - not a single acknowledgment.

The most damning thing in this series (apart from the technical debt) is the silence at the top when someone handed them the diagnosis on a plate.

Cutler's original vision was "no human touch." The gap between that and what Azure actually became is where the trillion dollars went.

Go read the rest. It's worth it.

Meanwhile on LinkedIn, there are still comments about how adorable Microsoft's leadership under Satya is... a carefully crafted PR image.

All those discussions about career suicide. Are you all so afraid to do what you think is right because you could get fired?

What Axel did by going public with his name attached is remarkable. He gains a lot of respect in my book, even if the account is one-sided and details are missing.

  • The career suicide wasn't escalating (although that was probably job suicide). The career suicide is venting and airing all of your former employer's dirty laundry. Unless your former employer is doing something deeply unethical, writing hit pieces against them after you leave is going to make you less attractive to future employers. Before this article, employers would see "experienced and available cloud engineer." After this, employers would see "backstabber who was probably fired for being a pain."

    But also, this is Hacker News. Many of us work for companies that are largely making the world worse in exchange for large salaries. Many of us have, probably unconsciously, built our lives around not doing what we think is right in exchange for not getting fired.

What makes anyone start a new project and think “I know, I’ll use Azure!”? I really don’t get it. Do they have a great sales org? Is it because a phb thinks “well they made Office so it must be good”?

I interviewed with a Dutch energy company migrating infra from AWS to Azure, and I have no idea what would make them do that (aside from inertia, but then why use Azure in the first place?).

And for some reason Azure usage is rampant in Europe.

  • A lot of enterprise orgs are completely helpless without Microsoft's identity solutions. That's what makes it easy to just adopt more and more Microsoft products.

  • In some places the purchasing decisions are not made by technical people. The infrastructure team gets azure budget and that's what they have to work with.

    At my work, the sales people regularly come to us with some Azure discount they got offered on LinkedIn or at some event. Luckily I have the power to tell them to fuck off.

  • At one startup I was in, Azure sales proactively reached out to the CEO on LinkedIn and then we were urged to swap off to it.

  • > What makes anyone start a new project and think “I know, I’ll use Azure!”?

    Because your org is likely already paying for O365 and "Entra ID" or whatever they call it nowadays, and so it seems like this will all integrate nicely and give you a unified system with consistent identity management across all domains. It won't - omg, believe me it will NOT - but you don't find that out until it's too late.

  • At the startup I worked at in 2023, Azure was considered the only “safe” way to use OpenAI APIs in prod (eg agreements that the data couldn’t be used for training).

    Working with Azure was one of the worst parts of that job.

  • The one place I worked that used it got a bunch of free credits for signing up, and had some license agreement for a Microsoft service (a Teams OAuth app or something similar) where a certain percentage of the infra had to be hosted on Azure.

    I don't remember the details of the second part, just that they were a "Microsoft partner" of some sort, which was beneficial for integrating with the Microsoft apps the product depended on and for appearing as an app in the marketplace. The company built software that ingested IM/chat data from corporations (Teams and, I think, something older).

  • Where I live (New Zealand), Microsoft is a much larger percentage of IT infrastructure than in, say, Bay Area startups.

    Companies are already used to working with Microsoft. Building on Microsoft's cloud feels natural.

  • It's CYA. Nobody ever got fired for buying IBM, the old saying went. And it was true. Perhaps they should have, but they weren't. Nowadays, Oracle and MS have taken that position. They have the "share of mind," a PR concept that unfortunately succinctly expresses the problem. Someone proposes MS or Oracle, and everybody nods because they've heard about it. If that causes problems, other people will have to solve them anyway.

    • I have literally never met a competent person who takes MS or Oracle seriously.

      I confess, I'm a little salty. It's just insane how widespread Azure is when there's no obvious reason to prefer it. Of course, having the whole market be dominated by 3 giant American companies (even in Europe) is annoying in its own right.

  • Lots of SMBs run SQL Server and .NET.

    Lift and shift into the cloud used to be the path of least resistance on Azure.

  • I work for a $300B+ company that spends nearly $1B a year on AWS.

    Microsoft engaged in a relentless romance campaign with our loser EVP and one of his reports for months, giving him the cool LinkedIn post opportunities that weak executives crave.

    Eventually he started pushing engineering to move to Azure.

    We have not moved yet (many bullets dodged so far), but the push is there, and it's a periodic major time sink entirely due to manipulation and flattery.

    The entire "multicloud" push was a marketing effort by Microsoft to try to undermine exec faith in their "what? No, that's a shit ton of work with zero return on investment" engineering teams.

I have no doubt Azure sucks, but almost all huge projects like that have systemic issues.

Axel sounds like a pretty smart guy, but I wanted to point out that I've seen this kind of behavior before, often from mid-level "job-hopping" engineers (sometimes with overly inflated egos) who overconfidently declare that everything the organization is doing is BS and that they have the magic solution to it.

And yes, sometimes by sending long-winded emails to very large internal groups about how their solution will address all the problems, if only someone recognized their genius (and eventually gave them a VP title and budget). Some of the time they are well-intentioned but missing crucial historical knowledge about why things are in the state they are, and why what they're proposing was tried 5 times before and failed.

Throwaway since I may want to work at Microsoft again one day.

Given my own experience at Azure I believe all of this. The post demonstrates there are serious management and structural issues throughout a large part, if not all, of the organization. And it definitely sheds some light on my experiences with the networking platform being so fragile and unreliable.

This post lends credence to the idea that large companies care about security just enough to either not get compromised, or "just" to get mildly compromised. Defense in depth costs too much in management's eyes, and they consider it a wiser use of resources to patch holes after they're made rather than prevent them in the first place.

Thanks to the author for sharing, and I hope your subsequent role is more enjoyable. It feels like the only way to make the structural changes being suggested is to climb the corporate ladder to accumulate sufficient power plus social and political capital, and then get buy in to painstakingly steer that behemoth of an organization in a safer and more sustainable direction.

Power Platform is of the same quality, I’d avoid it if possible.

I was a principal engineer in the Power Platform org and it always felt like a disorganized mess. Multiple reorganizations per year, changing priorities and service ownership.

  • These days, at work, I need to support applications built on Azure and Power Platform. Both are a hot mess. We get notifications that our APIM is down for at least 15 minutes every week, at random times. Power Platform is just a "preview" mess: things break and are not functional.

    I complained about it and was basically told to shut up; the industry is using them, so they must be right.

    No one is testing anything anymore.

I was a career Microsoft-stack developer until Azure. Comparing it to AWS immediately made me decide to move away from their stack and towards AWS.

Just the networking and security infrastructure was complete trash compared to how those things worked in AWS.

Not one regret in my decision.

We run 1000s of machines in Azure. It's garbage. Very few features work. Nodes are always having strange issues, especially on the networking side. And the worst part is that Azure support has zero interest in actually debugging things. We just got out of an outage today caused by the insanely slow SSDs that they attach to their Postgres DBs by default.

I highly sympathize with the author and as a former user of Azure I agree it's a terrible mess.

However, the author has committed magnificent career suicide. If you are in a dysfunctional environment, you don't go from issue to issue and escalate each one, proactively finding problematic issues.

Rather, you find the underlying issues (e.g. crashes not being assigned), prioritize them, and fix them.

By constantly whistleblowing on separate issues, escalating as high as the board, he is not trying to improve things by evolution but by revolution, and in revolutions heads roll.

  • The timeline and facts were quite different. Debating an org-wide quality issue on a 100+ member team's alias is not whistleblowing.

The only time I used Azure was for setting up Microsoft as a provider for authentication. Put me through a never-ending loop of asking for a Government of India issued document that was already submitted. Human support was non-existent. Decided never to use Azure in any product after that horrible experience.

If you cannot even get auth right I shudder to think what the rest of the product will be like to deal with should issues arise.

On a leadership level, it seems problematic that they ghosted the feedback. This directly leads to people like Axel, who feel ownership of the problem, breaking NDAs and creating company-harming posts. In my experience, leadership at least responds with corp-speak platitudes, meaning they got the feedback and either don't understand it or ignore it, but have been taught to always ask for feedback and answer it (even though the incentive is to ask for feedback, then ignore it).

    • To be honest, I don't think this is "company harming". What would be harmful is Azure being pwned when they didn't know and did nothing, or failing an SLA at the wrong time. Now they know.

    • The ultimate goal is to make customers spend money on Azure. Of course the information you published may make customers less likely to choose Azure, harming Microsoft.

      Being pwned can be explained away as an attacker having spent a lot of resources to do so. Failing SLAs can be a calculated gamble.

      I myself am grateful you published this! It gives a great inside view on what is going on in big tech in general and Microsoft specifically.

      1 reply →

    • Azure has been repeatedly hacked very severely, and it doesn't seem to make much difference to their adoption.

from part 2:

> Worse, early prototypes already pulled in nearly a thousand third-party Rust crates, many of which were transitive dependencies and largely unvetted, posing potential supply-chain risks.

Rust is really going for the Node ecosystem's crown in package-count bloat.

  • Rust is nowhere close to Node in terms of package number bloat. Most Rust libraries are actually useful and nontrivial and the supply chain risk is not necessarily as high for the simple reason that many crates are split up into sub-crates.

    For example, instead of having one library like "hashlib" that handles all different kinds of hashing algorithms, the most "official" Rust libraries are broken up into one for sha1, one for sha2, one for sha3, one for md5, one for the generic interfaces shared by all of them, etc... but all maintained by the same organization: https://github.com/rustcrypto/

    Most crypto libraries do the same. Ripgrep split off aho-corasick and memchr, the regex crate has a separate pcre library, etc.

    Maybe that bumps the numbers up if you need more than one algorithm, but predominantly it is still anti-bloat and has a purpose...

    • While I agree, the exact line "Rust libraries are useful and non-trivial" is one I have heard from all over the place, as if the value of a library is how complex it is. The Rust community has an elitist bent to it, or at least a vocal minority does.

      Supply chain attacks are real for all package registries. The JS incidents had more to do with registry accounts getting hacked than with the compromised libraries being bad or useless.

  • It really is about time that somebody did something about it.

    Start with Tokio. Please vend one battery-included dependency, and vendor in/internalize everything, thanks.

    • There is a difference between individual packages coming out of a single project (or even a single Cargo workspace) and packages coming from completely different people.

      The former isn't a problem; it is actually desirable to have good granularity for projects. The latter is a huge liability and the actual supply-chain risk.

      For example, Tokio project maintains another popular library called Prost for Protobufs. I don't think having those as two separate libraries with their own set of dependencies is a problem. As long as Tokio developers' expertise and testing culture go into Prost, it is not a big deal to have multiple packages. Similarly different components of the Tokio itself can be different crates, as long as they are built and tested together, them being separate dependencies is GOOD.

      Now, to use Prost with a gRPC server, I need a different project, tonic, which comes from a different vendor: Hyperium. This is an increased supply-chain risk that we need to vet. They use Prost. They also use the "h2" crate. Now I need to vet the code quality and the testing culture of multiple different organizations.

      I have a firm belief that the actual people >>> code, tooling, companies, and even licensing. If a project doesn't have (or retain) visionary and experienced developers who can instill good culture, it will ship shit code. So vetting organizations >> vetting individual libraries.
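
[Editor's aside: one way to make the "vet organizations, not crates" idea above concrete is to group a dependency tree by where its crates come from. A rough Python sketch over `cargo metadata --format-version 1`-style data — the grouping-by-repository heuristic is my illustration, and the sample crate list is made up for the example:]

```python
from collections import defaultdict
from urllib.parse import urlsplit

def vendors(packages):
    """Group crates by the org that hosts them.

    `packages` mimics the "packages" array of `cargo metadata`:
    dicts with "name" and "repository". Heuristic: treat the host plus
    first path segment of the repository URL (e.g. github.com/tokio-rs)
    as the vendor you actually need to vet.
    """
    groups = defaultdict(list)
    for pkg in packages:
        repo = pkg.get("repository") or ""
        parts = urlsplit(repo)
        segs = [s for s in parts.path.split("/") if s]
        org = f"{parts.netloc}/{segs[0]}" if parts.netloc and segs else "unknown"
        groups[org].append(pkg["name"])
    return dict(groups)

# Hypothetical slice of a dependency tree:
sample = [
    {"name": "tokio", "repository": "https://github.com/tokio-rs/tokio"},
    {"name": "prost", "repository": "https://github.com/tokio-rs/prost"},
    {"name": "tonic", "repository": "https://github.com/hyperium/tonic"},
    {"name": "h2",    "repository": "https://github.com/hyperium/h2"},
]
```

On this sample, `vendors(sample)` collapses four crates into two organizations (tokio-rs and hyperium) — two vetting targets rather than four.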

Personally, when asking others about their opinions on various cloud providers, AWS tends to emerge head and shoulders above the rest for one simple reason - AWS works.

And the reason AWS works is that AWS runs on AWS (in stark contrast to Azure and GCP, which afaik are not what MS and Google use internally). And when AWS doesn't work, support is there to help you.

To add nuance to this statement: the other providers have their own strengths and standout features, but if you have to approach every single feature with suspicion, you won't build stuff on top of them.

  • I've also noticed AWS tends to have fewer "magic" global services and tends to favor cell architecture, with partitions and isolation.

    These super duper magic global services seem to be the cause of most outages since the blast radius is so huge.

    On the other hand, the proposition of a magic, infinitely scalable service endpoint is nice from a developer perspective.

    • Even on AWS, if you go for the managed magic version of the thing, they'll make you pay more, you'll lose some flexibility, and the relinquished control will change things in ways that benefit AWS (slower scaling, limitations, unnecessary overprovisioning, overhead).

      An example: if you scale things manually by provisioning and starting EC2 instances via the API, it will be more performant and cheaper than Lambda or ECS Fargate (or Batch...). But those managed things at least work reliably.

      With the other two cloud providers, you'll likely run into a bug you cannot fix yourself, and you will have no support to help you.
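
[Editor's aside: to make the manual-scaling idea above concrete, the hard part is just deciding how many instances to run, then issuing one API call. A Python sketch — the sizing formula and all numbers are my illustration; the boto3 call shown in the comment uses its documented `run_instances` parameters, with placeholder values:]

```python
import math

def desired_instances(backlog, per_instance_rate, drain_seconds,
                      minimum=1, maximum=50):
    """How many instances are needed to drain `backlog` items within
    `drain_seconds`, if each instance processes `per_instance_rate`
    items per second. Clamped to a sane [minimum, maximum] range."""
    capacity_per_instance = per_instance_rate * drain_seconds
    needed = math.ceil(backlog / capacity_per_instance) if backlog else 0
    return max(minimum, min(maximum, needed))

# You would then launch the delta yourself, e.g. with boto3:
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(ImageId=ami, InstanceType="c6i.large",
#                     MinCount=n_new, MaxCount=n_new)
# (the AMI id and instance type here are placeholders)
```

Running this in a small control loop against your queue depth is the "manual" path the comment describes: more moving parts to own, but no Lambda/Fargate overhead or pricing.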

I knew Microsoft was incredibly dysfunctional (you have to understand this if you're supporting their suite and want to succeed), but damn, I'm floored by the incompetence reported, from juniors to the Board and seemingly every level of leadership in between.

Yet I'm also not surprised, because I keep encountering it in non-Microsoft orgs. The current crop of leadership in general seems to be so myopically focused on GTM and share price bumps that even the mere suggestion of a problem is a career-ending move for whoever reported it (ask me how I know). Making matters worse is that Boards and shareholders have let them get away with this for so long, across every major org, that these folks believe in their heart and soul that they're absolutely, infallibly correct. The higher up someone is in an organization, the higher the likelihood they'll reject any and all feedback from "beneath" them that is contrary to their already-decided-upon agenda.

The kicker is that I'm not sure how to actually deal with this in a way that minimizes pain. In my subjective experience, these sorts of companies simply do not change until and unless there's literally no other option other than failure - and then, they're likely to choose failure for the parachute selection instead of doing the hard work of reform. Maybe what's needed is for Microsoft (or any of the legion of similarly dysfunctional enterprises out there) to genuinely fail in a non-recoverable way so as to shock the wider industry/economy into taking serious action on corporate misgovernance.

Maybe failure is the best option.

I don't know. I just know that this isn't tenable.

  • > Maybe what's needed is for Microsoft (or any of the legion of similarly dysfunctional enterprises out there) to genuinely fail in a non-recoverable way so as to shock the wider industry/economy into taking serious action on corporate misgovernance.

    The naive model of capitalism says that the benefit of market competition is that it's possible for failing companies to get out-competed by non-failing ones. In practice, there's enough of a combination of "natural monopoly", lock-in effects, and anti-competitive practices that the software landscape is covered in companies that are too big to avoid, let alone too big to fail.

    • That's what I've been trying to impart to folks for a decade now. The lack of regulation has let apex predators capture the environment, and short of an environmental collapse (as in, the sudden and permanent destruction of compute in general that makes their business unrecoverable), the only solution is hunting the hunters - i.e., government regulation, monopoly breakups, market penalties, etc.

      There is no feasible way for someone to out-compete Microsoft, Apple, Google, or Oracle. None. They have to fail in some capacity to a significant, global-economy-harming degree to even provide an opening to competition in the marketplace. Even if AI turned out to be a huge nothingburger tomorrow, they'd still be unassailable.

      That is the problem.

This write-up is a shining example of why I’ve been rebuilding my business slowly away from Microsoft technology. Entra as IdP is one of the last projects. I’m probably not going to escape Exchange Online, but I’m going to be happy to finally federate the tenant to our internally managed IdP.

My spouse’s employer mandated that everyone move off AWS “because they’re a competitor” (they’re absolutely not), and Microsoft was happy to roll out discounts for Azure.

To say that has gone poorly would be generous. Azure is impressive in its own right, but it’s not comparable to AWS. (Which has its own problems, to be clear.)

The stagnation in Azure is apparent everywhere you look. The capacity issues have only gotten worse. There are still change advisory callouts in the Azure Portal with dates in the year 2020.

My most memorable anecdote from working in Azure is that they had two products named Purview and the internal MS people I talked to never figured out which one I was trying to use.

I was always very curious why people use Azure. Clunky, difficult to set up, and crazy prices. I know one person who is very happy with them because of the credits they gave him. I feel I don't have a model that explains what is going on there, and it would be cool to know why people pay them versus the competition.

  • In my experience, the Azure endpoint was way faster and significantly cheaper than the OpenAI endpoint.

This is pretty damning, if even half of it is true. I don’t work at Microsoft and I don’t have the knowledge to judge the reliability of Azure, but I do have friends who work as users of Azure and their words are not kind, especially about the new Fabric database, which is said to be crazy to pick for production at this stage — even while MSFT has already switched its certifications to Fabric, pushing customers to use it.

I’ll never work in a company that uses Azure as its main cloud services, just for the sake of quiet nights.

I do wonder what it looks like inside AWS and GCP, though. Is it the same level of chaos, and they just got more success because they started early? If so, maybe we can conclude that very large cloud operations are not sustainable under the current company structure — either because the technical knowledge required is too dense and companies can't retain the workers who hold it, or because companies are eventually forced to follow the horn of the marketeers.

  • This article is like a cockroach in a restaurant dining room. Azure has one; GCP/AWS do not.

    • The thing with cockroaches is that if even a single one is seen in the dining room and someone calls environmental health, regardless of the restaurant's prestige, they close it with immediate effect until they get their act together and a food sanitation inspection clears them.

      At the end, everyone feels better, in particular the customers.

> Few engineers could reliably build the software locally; debugger usage was rare (I ended up writing the team's first how-to guide in 2024); and automated test coverage sat below 40%.

A key clue and explains why so much of what Microsoft puts out is garbage. Wow.

Title: How Microsoft Vaporized a Trillion Dollars

  • As an investor, this is exactly how I feel. Everything was skyrocketing until OpenAI “diversified” mid-2025. The company’s market value has dropped by more than 1 trillion since late October 2025, so the title is factual. You can rightfully argue and be skeptical about the link I make, but not about the numbers :)

Well, part 3 at least explains something I've observed: the platform is incredibly unstable. The same calls, with the same parameters, will often randomly fail with HTTP 400 errors, only to succeed later (hopefully without involving support). That made provisioning with terraform a nightmare.
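A common client-side workaround (a generic sketch, not Azure- or terraform-specific — which status codes count as transient is my assumption, based on the behavior described above) is to wrap provisioning calls in retries with exponential backoff:

```python
import random
import time

def with_retries(call, attempts=5, base_delay=1.0):
    """Retry a callable returning (status, body) while the status looks
    transient (treating 400/429/503 as retryable here), with exponential
    backoff plus jitter."""
    transient = {400, 429, 503}
    for attempt in range(attempts):
        status, body = call()
        if status not in transient:
            return status, body
        # Back off 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay / 2))
    return status, body

# Simulated flaky API: fails with 400 twice, then succeeds.
responses = iter([(400, "transient"), (400, "transient"), (200, "ok")])
status, body = with_retries(lambda: next(responses), base_delay=0.0)
```

This is a band-aid, of course: it papers over the flakiness rather than fixing it, and retrying on 400 is only sane when the platform really does return 400 for transient faults, as described above.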

I won't even dive too much into all the braindead decisions. Mixing SKUs often isn't allowed if some components are 'premium' and others are not, and not everything is compatible with all instances. In AWS, if I have any EBS volume I can attach it to any instance, even if it is not optimal. There's no faffing about "premium SKUs". You won't lose internet connectivity because you attached a private load balancer to an instance. Etc...

At my company, I've told folks that are trying to estimate projects on Azure to take whatever time they spent on AWS or GCP and multiply by 5, and that's the Azure estimate. A POC may take a similar amount of time as any other cloud, but not all of the Azure footguns will show themselves until you scale up.

I see that it's fashionable to bash everything MS related in HN, but let's not pretend that the other major cloud providers don't have their own problems (e.g. https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77... or https://blog.barrack.ai/google-gemini-api-key-vulnerability/). We have had a couple of critical services hosted on Azure over ten years already, call me lucky, but we haven't had any major incidents. That said, the AI Foundry side is broken garbage at the moment, but so is also AI stuff from other providers.

  • Their VMs and load balancers mostly work. Their managed services are a crapshoot. We routinely "self hosted" at the company that used Azure to ensure some semblance of stability.

    For instance, our Patroni clusters were much more performant and stable than Azure Single Server Postgres (what a terrible product...).

    • Maybe this is why they retired Single Server PostgreSQL and are now offering only the new Azure Database for PostgreSQL (flexible server). Zero problem with the latter for us so far.

The "no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node" really stuck with me. You have to wonder how many other parts of the code lack ownership and are in there just because no one knows what will happen if you take them out.

  • This reminds me of discussions of the “MinWin” initiative back in the Windows 7 timeframe, and how the obstacle was that nobody actually knew what you could take out of Windows and still have it work, so they had to be conservative.

You have a current US president who has never read The Art of War.

Likely most company leadership, besides hedge fund managers, have never read The Art of War either.

This results in management that lacks strategic focus - they want to win the next battle (down in a valley, while giving the enemy the upper hand on the hill).

Your infantry (low-level ICs) are smart and capable - and the org is actively pursuing means to deskill them via some shit called (A.I.) - your colonels (mid-management) are comfortable resting on their laurels, since anyone who raises a voice is shown the door (hell, most of them manage people now & don't fight anymore).

Then you wonder why the country, the org, is losing. But hey, at least we posted a massive valuation.

  • Why read the Art of War when you wrote (or had ghost-written for you) the Art of the Deal?

The problem started because Azure was initially designed and released in a huge rush, because Microsoft was so far behind AWS and needed something better.

I am reminded of the research finding that every human-designed complex system that works well started with a simple system that did just one thing well, and new functions were added one at a time, with each one perfected before moving on to another. Which is the exact opposite of what happened here.

This read was a blast from the past. I'm not going to comment on much from OP and instead give a little of my experience there.

Straight out of college in 2017 I joined the Compute Fabric Controller (FC) org as a SWE on an absolutely wonderful team that dealt mostly with container management, VM and Host fault handling & repair policies, and Fabric-to-Host communication, with most of our code in the FC. I drove our team's efforts on the never-ending "Unhealthy" node workstream, the final catch-all bucket in the Host fault handler mentioned in OP. I also did heavy work optimizing repair policies and reactive diagnostics for improved repairs and offline analysis; built real-time ingestion of OS and HW telemetry from the Host, like SEL events, into the repair manager; wrote the core repair manager state machine in the new AZ-level service that we decoupled from the Fabric; drove Kernel Soft Reboot (KSR)/Tardigrade as a repair action with minimal VM impact for some host repairs; and helped stand up (and eventually owned) a new warm-path RCA attribution service to help find the root underlying causes of reliability issues and feed some offline analysis back into the live repair manager.

The work was difficult but also really, really interesting. For example, balancing repair policies around reliability is tricky. In grey situations there's a constant fight between minimizing total VM downtime and avoiding any VM interruptions/reboots/heals at all, because the repair controller doesn't have perfect information. If telemetry points to VMs being degraded or down on the host, yet in reality they're not, we are the ones inducing the VM downtime by performing an impactful repair. If we wait a little while before taking an impactful repair action, it may be a transient issue that resolves itself, at which point we can do much less impactful repairs like Live Migration if the host is healthy enough. On the flip side, if telemetry says the VMs are up yet they're actually down and we just don't know it yet, taking time to collect diagnostics before acting leads only to more total downtime.
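The grey-zone tradeoff above can be caricatured as a tiny decision rule — purely illustrative, with invented thresholds and action names, not the real repair manager:

```python
def repair_action(unhealthy_seconds, host_healthy_enough, grace_period=120):
    """Toy repair-policy decision for a node whose telemetry looks bad.

    Wait out possibly-transient signals first, prefer a low-impact repair
    (like live migration) when the host can still support one, and only
    fall back to an impactful repair when nothing gentler is left.
    """
    if unhealthy_seconds < grace_period:
        return "wait"          # may be transient; don't induce VM downtime
    if host_healthy_enough:
        return "live_migrate"  # low-impact repair on a still-workable host
    return "impactful_repair"  # e.g. reboot: costly, but cheaper than waiting

# The tension described in the text: waiting helps when the signal is noise,
# and hurts when the VMs really are down and the clock is ticking.
decision = repair_action(unhealthy_seconds=30, host_healthy_enough=True)
```

The hard part, of course, is that `unhealthy_seconds` and `host_healthy_enough` are themselves inferred from imperfect telemetry, which is exactly why the real policy work was so difficult.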

When I joined in 2017 our team was 7 or 8 people including myself, yet had enough work for at least double that number. On-call was a nightmare the first 2 years. Building Azure back then was like trying to build a car while sitting behind the steering wheel of that car as it barreled down the highway.

Everyone on my immediate team those first couple of years was a joy to work with: highly competent, hard working, and all of us putting in absurd hours. For me 60hrs/wk was average, with many weeks ~80 and a few ~100. Other than the hours, though, it was a splendid team environment and I'd like to think we had a good engineering culture within our team, though maybe I'm biased. Engineering culture and quality did, however, vary substantially between orgs and teams.

We were heavily under-resourced and always needed more headcount, as did nearly every other team in Azure Compute. That never changed during my tenure, even though my team's size ballooned to ~20 by 2020 and eventually grew big enough that we had to split the team. There was high turnover from the lack of headcount and overwork, which was somewhat alleviated by lowering the hiring bar... which obviously opened up another can of worms. This resourcing issue might explain, in part, why Azure is the way it is. We were always playing catch-up as a result of years of chronic understaffing.

I eventually burnt out, which turned into spiraling mental health, physical health issues, constant panic attacks, and then a full-blown mental health crisis, after which I took LOA and eventually left the company. I came back briefly during LOA and learned that the RCA service I'd built with the help of a coworker (who also left Azure) — only a small part of our overall workload — had turned into a full-fledged team of 9 people dedicated to working on that service in my absence. I know that stating some of this might affect my employment in the future but I don't really care.
I know I'm not alone in experiencing burnout working in Azure. It wasn't my manager's fault either; he was amazing. He'd often ask, and I would incorrectly yet confidently reassure him that I wasn't burning out — I simply didn't notice the signs. Things are better now though, and I'm just happy to be here.

Kudos to the many brilliant people I worked alongside there, I hope you're all doing great.

  • > There was high turnover from the lack of headcount and overwork which was somewhat alleviated by lowering the hiring bar...

    Seen this game played before, at AWS working on the control plane for outposts. The correct solution here is dedicated operations staff to coordinate with the team and let the developers fast track issues that are resulting in high call volumes, not lowering the hiring bar for the entire team. The problem you run into with high call volumes and small teams is that it disrupts most developers enough that they can't build solutions and deal with the maintenance burden at the same time. You bleed talent because it places way more stress than necessary on the team.

  • 2 years of 60+ hours weeks is not good engineering culture, or any kind of culture.

    • Particularly when simultaneously "We are currently in YC Startup School as Geddy at geddy.io and we plan to launch soon."

  • The first and most important lesson that I try to teach every young developer starting in the industry: go home after putting in the hours negotiated in your contract. Drop your pen. Go home. Sleep well.

    And I hope, that every sensible senior developer in here does the same. Lead by example. Maybe it would prevent a few burnouts in this industry.

    And if you are a manager, send your people home after they have put in their negotiated hours. For their own well-being — it’s your responsibility. And if that’s not working, then force them to go home.

    I hope you are better now and got through the tough time. All the best to you!

    • The important nuance: you need to start going home at the correct time from day one. You can't start once you already feel overwhelmed, because by then the expectations have solidified.

      The corollary is that you also need to show up on time and put in honest effort while you're there.

  • >There was high turnover

    This is a huge knowledge drain. You're constantly spending time getting new engineers up to speed and it takes years to relearn all the nuances the last person knew.

    You're in a constant cycle of re-learning the hard way instead of proactively applying experience.

Using Azure has severely affected my mental health over the last year. Reading these comments has been therapeutic.

I’ve been working with Azure and Azure Germany for the past few years and have a strong history with AWS.

I cannot count how many times disks failed to attach during AKS rescheduling. We built polling where we polled Entra ID for minutes until it became “eventually” consistent — not trusting a service principal until it had been fetched consistently for at least one minute. The slowness of Azure Functions was unbearable. On Azure Germany, IoT Hubs had to be “rebooted” by support constantly — which was a shocking statement in itself. The docs are always lying or leaving out critical parts. The whole Premium vs Standard stuff is like selling Windows licenses. The role model and UI are absolutely inconsistent.
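That "trust it only after a minute of consistent reads" workaround can be sketched as a poll-until-stable loop (hypothetical names and parameters; the clock and sleep are injectable so the sketch is demonstrable without real waiting):

```python
import time

def wait_until_stable(fetch, stable_for=60.0, interval=5.0, timeout=600.0,
                      now=time.monotonic, sleep=time.sleep):
    """Poll `fetch` until it has succeeded (returned truthy) continuously
    for `stable_for` seconds; any miss resets the window. Returns False
    on timeout."""
    start = now()
    first_success = None
    while now() - start < timeout:
        if fetch():
            if first_success is None:
                first_success = now()
            elif now() - first_success >= stable_for:
                return True
        else:
            first_success = None  # eventual consistency: one miss resets trust
        sleep(interval)
    return False

# Demo with a fake clock: the "service principal" becomes visible at t=10s
# and stays visible, so the 60s stability window closes at t=70s.
clock = {"t": 0.0}
def fake_now():
    return clock["t"]
def fake_sleep(seconds):
    clock["t"] += seconds

ok = wait_until_stable(lambda: clock["t"] >= 10, now=fake_now, sleep=fake_sleep)
```

In production you would replace `fetch` with the actual read of the service principal, which is exactly the kind of client-side scaffolding no one should have to write around an identity service.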

The stability, consistency of IAM, and speed of AWS in comparison make me truly wonder how anyone stays with Azure. One reason might be that Windows instances are significantly cheaper there.

I have been in a Microsoft-adjacent company (meaning lots of people bounced between Microsoft and it), and all this makes a lot of sense. The almost ideological “everything in house” and politically oriented philosophy they had fits like a glove. Some of the ex-Microsoft people hated it, some of them missed it. But the picture they painted was pretty bleak.

Given how Windows is going, what’s described in the article doesn’t seem so shocking either. Even though they need not be correlated products, I can’t help seeing a similar shortsightedness in the playbooks they are adopting.

> Furthermore, I contributed to brainstorming the early Overlake cards in 2020-2021, drafting a proposal for a Host OS <-> Accelerator Card communication protocol and network stack, when all we had was a debugger’s serial connection. I also served as a Core OS specialist, helping Azure Core engineers diagnose deep OS issues.

What exactly are these "Overlake accelerator cards"? What are they accelerating?

Having now read the six parts, I assume the same management issues, and junior devs all over the place, are the reason why Windows development has become a mess, why Project Reunion went nowhere sane, and why only Windows employees are left caring about WinUI 3 and WinAppSDK.

If only we had a return of netbooks, meaning OEMs finally embracing GNU/Linux in consumer stores, instead of leaving it to the technically minded who know about online stores.

Azure Functions have been solid for us. No real weird downtimes and if something happens its usually because we did something wrong.

We don't do very complicated things, mainly App Services with Azure SQL and Azure Functions.

Having said that, Microsoft did botch the .NET 8 -> .NET 10 migration for Azure Functions with Consumption Plan. So yeah ... we're beginning to see some of the cracks.

We signed up to go all-in on Azure because our CEO got an xbox to take home to his kids.

> That entire 122-strong org was knee-deep in impossible ruminations involving porting Windows to Linux to support their existing VM management agents.

> My day-one problem was therefore not to ramp up on new technology, but rather to convince an entire org, up to my skip-skip-level, that they were on a death march.

> I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node

This is most corporates. I'm sure this was celebrated as a successful project — congratulations to everyone, along with big bonuses, RSUs, raises, and promotions, mostly into other orgs to bring this kind of 'success' to other projects (or other companies). These people are mostly gone in less than 2 years. They continue to rack up 'wins'.

The VPs are dumb as shit, but they need 'successful' projects that have fancy names that they can present to their exec team.

The 173 agents are to give wins to a large number of people and teams, all these people contributed to this successful project.

If it continues, there will be a lessons-learned PowerPoint, followed by 10x growth in headcount, promotions for everyone, and doubling down. 270 people can deliver a baby in 1 day, and all that.

  • In part 2

    > This group was now tasked with moving their inherited stack to the new Azure Boost accelerator environment, an effort Microsoft had publicly implied was well underway at Ignite conferences since 2023.

    The goal is to attach your projects to something announced by the CEO and ride the career rocketship!

  • Can almost guarantee there were 173 agents because there were 173 siloed teams with competing goals and priorities, each working on their own codebase in isolation.

    And no, a 174th team doesn't solve it. Communication and collaboration across teams is key.

My comment history here is full of complaints about MS Teams, the chat app. It suffers from the "re-use every existing MS tech" problem. Building it on top of SharePoint, I'm pretty sure, resulted in its top problems over the years (some since fixed):

- search sucked
- can't scroll back to old messages
- couldn't do private channels
- the 1:1 mapping of teams to SharePoint sites, resulting in a million teams when all you really wanted was a channel
- can't rename teams or channels

I'm sure many more I didn't catch. These are all observations from outside, I've never worked at MS

I just do not understand how Azure has the scale it does. You only need to login and click around for a bit to see this is not a coherent system designed by competent people. Let alone try and actually build something on it.

Who are the customers? Who is buying this shit?

  • From my old experience in IT - people just default to Microsoft for everything. They don't want the hassle of learning anything else and assume better the devil you know. Glad I'm out of that world, but it's wild what people will put up with.

  • Microsoft shops. Lots of C# devs gravitate to it naturally. I’m glad I abandoned the MS stack over a decade ago.

    • .NET Core runs just as well on ECS though. And C# tooling is rock solid in VS Code on Mac. No need to touch Azure or Windows.

  • People and organizations that built things on top of Microsoft tech. Especially with a long history going back to NT times.

    HN, YC, the startup environment, and academia are a Unix bubble. They all feed into each other, especially because Linux is gratis, which helped all of them deploy projects/products/papers cheaply. Unix systems traditionally lack much of the upper layers, so it is the responsibility of the company, persons, and developers to deal with OS minutiae. You need sysadmins, devops, SREs. Those are common roles in this Unix bubble. The dependency chains here are usually flatter, since that keeps mid-term costs lower.

    Other organizations, like governments and bigger orgs such as banks, prioritize having somebody else liable (i.e. someone they can blame) and prefer not to hire technical competence in-house, relying on other companies instead. This is where Microsoft gets a lot of clients. You buy a bunch of server licenses. Your Microsoft support person installs them and sets up IIS via the GUI. And then you just upload your code every now and then. The OS updates, the IIS server, etc. are all the responsibility of Microsoft and the middlemen companies. Minimal competence from the original org is required. There are multiple middleman businesses who all give zero fucks about anything but whatever is immediately downstream from them. This is more usual in huge, already publicly traded businesses. Moreover, investors actually mandate certain things that only these layers of irresponsibility can deliver :) So you see this kind of switch happening as companies approach IPO.

    Azure is the cloud label that forces the first paradigm onto the second for Microsoft products. It got lots of support because shareholders liked it. I don't think the original NT design and Microsoft's business model were bad; they actually worked very well. However, shareholders gonna shareholder. So they pushed hard for Microsoft and its clients to move to the "cloud". Microsoft executives saw the huge profit and share-value potential of pushing the Azure brand too. It was the AI of the 2010s, after all.

  • > You only need to login and click around for a bit to see this is not a coherent system designed by competent people

    Ironically, the book "Hit Refresh" hits a nerve: every Azure web page has a refresh button. Isn't that a throwback to Web 1.0?

  • If you put me in front of AWS I'd have the same reaction. Or GCP for that matter, where I did have your reaction.

    It's familiarity and knowing how the beast operates. I know how to read the docs and understand the licensing.

    Any one piece of software could be a pile of shit with a terrible UX, but you're going to find those who are so familiar with it that everything else looks alien.

  • Google and especially Amazon/AWS compete with a lot of large companies which drive them towards Oracle, IBM and Microsoft as escape hatches.

    For instance, Walmart doesn't want to pay their largest competitor.

  • Because for some it works. At least at my workplace I haven't yet heard the kinds of stories I see here. I use some Azure too, and apart from some weird UI bugs I've never had really big issues.

  • If you are a Microsoft shop then most likely you are on Azure. Your CFO would love the costs saved.

  • No idea but I think it's in half or more of the job ads I see in the Netherlands. I don't get it.

Does this mean AWS's #1 position is safe?

Every big cloud provider has its share of UX/stability/customer support issues.

At this point, it feels less like AWS is winning on merit and more like it's the 'least bad' option because the alternatives are even worse.

"Risk aversion preventing fixes" is the most accurate part. I've seen this at other large companies too. You have a known bug, you know exactly how to fix it, but nobody will approve the change because "what if it breaks something else." So the bug stays forever and everyone just works around it. The irony is that the workarounds eventually cause more breakage than the fix ever would have.

If, like me, you started reading and after a while started thinking “Wait, how many parts is this going to be in‽”, the answer is “six”.

The "too risky to deploy" problem is really a visibility problem. When you can't quickly see what's actually changing in a deploy, fear becomes the default. The teams that break out of this aren't the ones who stop shipping, they're the ones who build better signals before the deploy so engineers can ship with confidence instead of just hoping nothing breaks.

I read all 6 posts. OP is genuine and very talented. Unfortunate how foundational issues fall through the cracks in orgs.

> was maintaining in-memory caches containing unencrypted tenant data, all mixed in the same memory areas, in violation of all hostile multi-tenancy security guidelines

Splitting caches to different isolated memory areas will not make shareholders happy, will not lead to promotion and will not even move the project forward.

Simply put, designing secure software is detrimental in that environment.

I’m not an expert and am surprised by the extent of Azure’s technical debt and its consequences. What would be a “minimal” reproducible configuration or setup of services that shows those technical deficiencies most clearly? A “benchmark for cloud computing services”, for lack of a better description.

I have always wanted to find some technical rebuttals, and I found one on reddit.

https://www.reddit.com/r/programming/comments/1sbir8j/commen...

I'll skip the other comments and focus on the technical ones:

> there are hundreds of “agents” which run on a one time basis to install systems as part of deployment architecture. These agents often amount to pretty simple scripts or programs. They most often run one time per update deployment, or if nodes are repaved. Some install small daemons. It’s called micro service architecture. Guy claims to be some cloud wiz but doesn’t get these basics.

> That said He’s put cutlers original work on a pedestal, when fabric controller should have been replaced a decade ago. The monolithic nature of fabric has been a huge issue for reliability and scalability, and the company is trying its hardest to move as many features out of it into microservices as it can.

I wonder if OP can respond to this rebuttal? It looks like the commenter works on a neighboring team. No offense intended, but I'm really curious about the technical part.

Reading through this reminded me of just how engulfed in acronyms and lingo MS engineers must be. Much like AWS engineers with an acronym for every service that gets thrown around with the assumption of understanding, I felt like I needed a dictionary of those just to understand what was going on.

As far as I know, you still cannot rename a resource. Insanity.

I don't even work with it that much and have a laundry list of complaints about the weird little edge cases or funky pieces of documentation required to make things work.

Great series of articles and completely believable. My first thought after reading is I hope the author doesn’t get sued for violating his non-disclosure agreement.

Thanks for that, now I have a rock-solid argument when people say "oh we're already Microsoft customers, we'll just use Azure, it's easier, and they have Active Directory!!"

Some of this reads like parody, for example: "Cutler’s intent was to produce a system with the same level of quality, unshakable reliability, and attention to detail he was famous for in his work on VMS and NT."

I'm not really here to take shots at Dave Cutler, but Windows NT was not known for its unshakeable reliability. If it's known for anything, it's for lacking basic security measures. I remember demonstrating to people who joined my WiFi network that I could automatically obtain remote shells on their laptops.

  • > I'm not really here to take shots at Dave Cutler, but Windows NT was not known for it's unshakeable reliability.

    NT itself (the kernel and native mode APIs) is pretty well designed and implemented, in my opinion. I know there were findings from fuzzing kernel and native mode APIs in early versions of NT, but by about the Windows 2000-era it was pretty solid.

    Win32 and the mess of APIs that grew up around it I'm less enthused with. NT itself is very impressive to me.

    My fever-dream OS is an NT kernel with a modern and updated Interix subsystem as the main subsystem, with Win32 as a compatibility layer.

I tried to use Azure once (more than 5 years ago), and the signing up kept crashing on me for hours. Never used it again since then. Some things are obvious.

Microsoft Azure has always been a clown show. I've found so many obvious bugs. The quality is not there and never will be. No serious companies rely on it. Use virtually any other vendor or host it yourself.

For some reason, MS is still doing well. I’m not sure what conclusions I should draw from that, other than big businesses are hard to kill?

This makes it extra silly to trust that GitHub won't train on your private repos, if they haven't already — even just by accident.

I've worked in Windows for many many years, no idea who this guy is. He is randomly name dropping. He wants attention.

At this point, it’s very clear that people nowadays choose Rust mostly to be part of the cult rather than from a clear understanding of its shortcomings and advantages over languages such as C and C++. It has gotten to the point that some devs, after watching a two-hour YouTube video criticizing C++, declare it the worst programming language. Unfortunately, such people become decision makers at giant tech companies too.

I've said it before and I'll say it again: I'm glad Rust has good package management, I really am. However, that very strength breeds a dependency-heavy culture. In situations like this it's hard to use dependencies because of the number of transitive dependencies some crates pull in. I really wish this would change. Of course this is a social problem, so I don't expect a good answer to come of this...

> Cutler’s intent was to produce a system with the same level of quality, unshakable reliability, and attention to detail he was famous for in his work on VMS and NT.

I'm not sure whether this is serious or irony.

  • Search VMS stability, I think the consensus is clear.

    Then Google VMS's longest uptime: the record is 28 years. VMS systems often achieved five nines (99.999%, roughly five minutes of downtime per year) over 10-year stretches, so no irony.

    He took a bunch of folks with him from DEC to Microsoft to make NT, and of course his principles.

    Nowadays NT is bomb-proof, believe it or not.

    Most of the crashes are in device drivers, and occasionally in UI code (Win32k) that arguably shouldn't be in the kernel, but the kernel itself is solid.

    (Yes I am a big fan)

    • I remember reading "Showstoppers", where David Cutler was quoted as saying, "If you break the build, I'm the lawn mower and your ass is grass." Do you think such an attitude is mandatory for good kernel-level code?

      (I actually think it is, and I've argued with people on HN about it, although I've never written any professional kernel code myself)

      2 replies →

This smells of someone's Clawd writing something deceptively, much like the other semi-viral content that landed on reddit related to DoorDash systems.

Is it just me, or does this describe most Microsoft software at the moment? I tried to sign into my personal Microsoft account to set up an OAuth flow and was greeted by an infinitely repeating error dialog about some internal service that had failed.

At work, I use Outlook. The number of times I've gotten caught in an auth loop where I enter my creds + 2FA again and again, only for the screen to flicker and start all over again.

Complete garbage.

i run fastapi APIs on linode with cloudflare in front and honestly the simplicity is underrated. predictable billing, docs that match reality, no surprise platform regressions. for a straightforward API workload the hyperscaler tax doesn't make sense unless you genuinely need their scale

"The company formalized the idea that defects could be fixed through human intervention on live production systems" (From Part 5).

Uh...yeah. I think we all realized that years ago.

  • Great, but then you tie your growth to support headcount. Normally you would see enormous costs upfront for R&D and bringing the thing up, then marginal costs when adding capacity (the hardware, mostly). If capacity is proportional to the number of humans looking after the system, you will soon hit a limit, and the costs won't look good either.

What an epic takedown.

Microsoft should have promoted this guy instead of laying him off.

Did Microsoft really lose OpenAI as a customer?

  • The answer to your question is in the public releases. MS went from primary partner (under ROFR) to one of the options. They retain IP rights and API hosting, although in recent weeks we learned that OpenAI was planning a workaround with AWS and Microsoft said they might sue them for that. So the happy marriage is over, it’s more like a custody battle now: https://www.reuters.com/technology/microsoft-weighs-legal-ac...

til: there are people who "trusted" azure at all

I only used that shit platform because some Microsoft consultant convinced idiotic C-suite that Azure was the future.

A former Azure Core engineer’s 6-part account of the technical and leadership decisions that eroded trust in Azure.

  • What's your assessment of AWS and GCP? Do you think it's likely they suffer from some of the same issues (e.g. manual access to what should be highly secure, private systems; the instability; the lack of security)?

    • As a former GCP engineer, no, the systems are not generally unstable or insecure.

      There is definitely manual access of data - it requires what was termed “break glass” similar to the JIT mechanism described by the author. However, it wasn’t quite so loose; there were eventually a lot of restrictions on who could approve what, what access you got after approval, and how that was audited.

      It was difficult to get into the highest sensitivity data; humans reviewed your request and would reject it without a clear reason. And you could be 100% sure humans would review your session afterwards to look for bad behavior.

      I once had to compile a large list of IP addresses that accessed a particular piece of data to fulfill a court order. It took me days of effort to get and maintain the elevated access necessary to do this.

      I have a lot of respect for GCP as an engineering artifact, but a significantly less rosy opinion of GCP as an organization and bureaucratic entity. The amount of wasted effort expended on engaging with and navigating the bureaucracy is truly mind-boggling, and is the reason why a tiny feature that took a day to code could take months to release.

      1 reply →

  • Why do you speak about yourself in the third person?

    Also, after this:

    https://news.ycombinator.com/item?id=20341022

    You continued to work at Microsoft and now there is this takedown?

    I'm no friend of MS (to put it very mildly) but it seems to me your story is a bit inconsistent as well as the 7 year break between postings.

    • The comment came from the text input field on the submission form; it wasn't clear it would show up as a comment. The old thread you refer to had little to do with Microsoft per se. Let me know if I can help with the inconsistencies you mention.

    • > Why do you speak about yourself in the third person?

      When you submit a link to HN, there is an entry field for text in addition to the url.

      It does not really describe what the text is used for. For links, the content of that field is simply added as the first comment.

      Someone who is unfamiliar with the submission process may assume this field should describe what they are submitting, and not format it like a comment.

      Then that text gets posted as the first comment and tons of people downvote it, jumping to the conclusion that the weird summary comment is from an AI, and not the submitter describing their own submission.

      (I also assumed these comments were AI until someone else pointed this out)

      2 replies →

  • I downvoted this comment for sounding like a summarizing LLM, not adding anything substantial beyond the title of the post, before realizing you were the poster and author.

Now you can share this with anyone who says, “AI will make software faulty/buggy/unstable/garbage/whatever” - it won’t; people are perfectly capable of handling it themselves. /s

The first couple of paragraphs felt like a parody of a guy who goes to a diner and gets upset the waitress doesn’t address him as Dr.

It didn’t get any better.

When things must be shipped quickly, shit breaks and corners get cut; large orgs are full of dysfunction. I'm not sure such insight was worth setting your own career on fire.

Any complex system - and these cloud systems must be immensely complex - accumulates cruft, bloat, and bugs until the entire thing starts to look like an old hotel that hasn't been renovated in 30 years.

  • It’s not inevitable. That is absolutely the default without significant effort, but if you’ve been around the traps long enough (in enough organisations), you get to see that the level of quality can vary widely. Avoiding the mud-pit does require whole-of-org commitment, starting from senior leadership.

    This story is more interesting, in my opinion, in how quickly things devolved and also how unwilling the more senior layers of the org were to address it. At a whole company level, the rot really sets in when you start to lose the key people that built and know the system. That seems to be what’s happening here, and it does not bode well for MS in the medium term.

This reads like it was written by the Cleverest Person in the Room. I have to use Azure Devops at work, and some of the critique of Azure rings true for me, but the author-centric presentation was quite off-putting.