Comment by anonymsft318

6 hours ago

As a Microsoftie of more than a decade... Yeah, I see this.

We have an internal system called Cosmos[0] that does a great job of processing huge quantities of data very fast. And we sat on it for years while the rest of the industry moved to Spark and its derivatives. We finally released it as Azure Data Lake Analytics (ADLA) but did a shit job of supporting/promoting it.

We built Synapse, and it's garbage. We've now got Fabric which I guess is the new Synapse. I wouldn't really know because I probably have five different systems that I use that basically do large-scale data processing, and yet Fabric isn't one of them; who knows, maybe it will become the sixth?

We've had numerous internal systems for orchestrating jobs, and it wasn't until Azure Data Factory that we finally released something externally that we sort-of-kind-of-but-not-really use internally. (To be fair, some teams do use it internally, but we're not all rowing in the same direction.)

I regularly deal with multiple environments with different levels of security isolation. I don't even know how it's all supposed to work -- I have my regular laptop, a secure workstation, and three accounts spread across the two. On top of that, I have to go through privileged account escalation to activate certain roles; when I'm done, there's no apparent way to end the activation early, so I just let it time out.
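
(A hedged aside: if it's PIM for Azure resources underneath, a SelfDeactivate request is supposedly one way to end an activation early. A minimal sketch, assuming the Az.Resources module; the subscription, principal, and role definition IDs below are all placeholders:)

    # Hedged sketch, assuming PIM for Azure resources via the Az.Resources
    # module; subscription and principal IDs are placeholders, and the role
    # GUID shown is the built-in Contributor role as an example.
    New-AzRoleAssignmentScheduleRequest `
        -Name (New-Guid) `
        -Scope '/subscriptions/00000000-0000-0000-0000-000000000000' `
        -PrincipalId '11111111-1111-1111-1111-111111111111' `
        -RoleDefinitionId '/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c' `
        -RequestType SelfDeactivate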

These things are but a fraction of the Azure offerings, but literally everything I have used in Azure makes me absolutely HATE working in the cloud. There's not a single bright side to it AFAICT. The only reason Azure makes so much damn money seems to be that Microsoft is huge and can leverage its size into growth. We're very much failing up here.

[0] https://www.microsoft.com/en-us/research/publication/big-dat...

> I probably have five different systems

This is the story of Microsoft - five different ways to do the thing, none of which do everything, and all of which are in various states of disrepair ranging from outright deprecation on up through feature-incomplete preview. Which one do you use? Who knows, but by the time you get everything moved over to that one and make allowances for all the stuff the one you chose doesn't support, there will be a new more logical choice for "that one" and you'll have to start over again. Wheee.

  • And now slap on widespread vibe coding and PRs reviewed by LLMs without anyone giving them a proper look.

    • We are now definitely doing a lot of that. My manager has been saying things like, "I don't even know how it works, but I used AI to build [thing], and I just sent it out as a PR." He's very strong technically, but the mindset has absolutely shifted to "move fast and break things, yoloooooo". It's frustrating, to say the least.

Ugh, this sounds like when I worked at Oracle/OCI. Some environments required a VPN, some a jumpbox, and some required logging into a virtual desktop and then into a jumpbox from there. Just thinking about it gives me PTSD.

  • any sufficiently large organization that's been around for a decade or two trends towards spaghetti-access

    • Yup, same boat here (mid-size company).

      All the corporate stuff is behind Okta, so that's easy enough.

      But all the dev/test systems are a mix of SSO, individual logins, etc. At least they're all behind the same VPN (except when they aren't, but that's less common).

      And of course, if you're a cloud engineer (vs "normal" software engineer), you also have to deal with AWS access, which is a whole different can of worms.

    • And yet, somehow AWS managed to get this right-ish. They evolved, learned from their mistakes, and created de facto standards along the way (like the S3 object storage API), while still supporting decades-old services. And I'm sure they'll withstand the current AI craze.

Their support team likes to sit on things for a while too. I'm on day 4 of waiting for Azure to approve my support request to increase my Azure Batch vCPU quota from the default of 4 to 20 for the ESv3 series. I signed up last week and converted to a paid account. I'm going to use Google Cloud Batch today instead.
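
For what it's worth, here's a minimal sketch of checking where that quota currently stands (PowerShell, assuming the Az.Batch module; the account and resource group names are placeholders):

    # Hedged sketch: the account and resource group names are placeholders.
    # DedicatedCoreQuota is the per-account vCPU cap the ticket would raise.
    Get-AzBatchAccount -AccountName 'mybatchacct' -ResourceGroupName 'myrg' |
        Select-Object AccountName, DedicatedCoreQuota, LowPriorityCoreQuota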

  • You’ve made a fundamental mistake and you’ll have the same result from every cloud provider.

    You’re using a legacy v3 series that’s being removed from the data centres, in an era when you could be using v6 or newer instances that are freshly deployed and readily available.

    If you can’t be bothered to keep an eye on these absolute basics, you’re going to have a rough time with any public cloud, no matter their logo design.

    Right now you're paying more for less compute and having to deal with low availability too! Go read the docs and catch up to the last decade of virtual hardware changes.

    Or, just run this and pick a size:

        # Az.Batch module; lists the VM sizes Batch supports in the region,
        # filtered down to E-series v6/v7 SKUs
        Get-AzBatchSupportedVirtualMachineSku -Location 'centralus' |
            Where-Object Name -like 'Standard_E*v[67]'
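
    Any size it returns can go straight into your pool config and the quota request instead of ESv3.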

Ah, I remember Cosmos and SCOPE from my time at MS ~15 years ago! It was actually pretty cool technology. So is it still around?