Comment by no_wizard
4 days ago
At what point is AWS worth using over other compute competitors when you're using them as a storage bucket + VPS? They're wholly more expensive at that point. Why not go with a more traditional but rock-solid VPS provider?
I have the opposite philosophy, for what it's worth: if we are going to pay for AWS, I want to use it correctly, but maximally. So, for instance, if I can offload a given thing to Amazon and it's appropriate to do so, that's preferable. Step Functions, Lambda, DynamoDB, etc. have, over time, come to supplant their alternatives, and it's overall more efficient and cost-effective.
That said, I strongly believe developers don't give enough consideration to how to maximize vendor usage in an optimal way.
Your management will frequently be strangely happier to waste money on AWS, unfortunately.
Truly a marketing success.
> That said, I strongly believe developers don't give enough consideration to how to maximize vendor usage in an optimal way.
Because it's not straightforward. 1) You need general knowledge of AWS services and their strong and weak points to be able to choose the optimal one for the task. 2) You need good knowledge of the chosen service (like DynamoDB or Step Functions) to be able to use it optimally; being mediocre at it is often not enough. 3) Local testing is often a challenge or plain impossible, so you often have to do all your testing on a dev account on AWS infra.
Most work isn’t greenfield.
AWS can be used in a different, cost-effective way.
It can be used as a middle ground capable of serving the existing business while building towards a cloud agnostic future.
The good AWS services (S3, EC2, ACM, SSM, R53, RDS, metadata, IAM, and E/A/NLBs) are actually good, even if tracking their billing changes is a concern.
If you architect with these primitives, you are not beholden to any cloud provider, and you can cut traffic over to a non-AWS provider as soon as you're done with your work.
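For the actual cutover, a weighted Route 53 record is one way to shift traffic gradually to the new target. A rough boto3 sketch only; the hosted zone id, record name, and IPs below are made-up placeholders:

```python
import boto3

route53 = boto3.client("route53")

def set_weight(name: str, set_id: str, ip: str, weight: int) -> None:
    # UPSERT one weighted A record; raising the non-AWS weight while
    # lowering the AWS one shifts traffic gradually during the cutover.
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",  # placeholder zone id
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }]},
    )

# Example: send 90% of traffic to the new provider, keep 10% on AWS.
set_weight("app.example.com.", "new-provider", "203.0.113.10", 90)
set_weight("app.example.com.", "aws", "198.51.100.10", 10)
```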
Of that list, watch out: IAM != IAM != IAM, so "cloud agnostic" is that famous 80/20 split.
Some stuff is going to be provider-specific.
Let me explain why we’re not talking about an 80/20 split.
There’s no reason to treat something like a route53 record, or security group rule, in the same way that you treat the creation of IAM Policies/Roles and their associated attachments.
If you create a common interface for your engineers/auditors, using real primitives like the idea of a firewall rule, you’ve made it easy for everyone to avoid learning the idiosyncrasies of each deployment target, and feel empowered to write their own merge requests, or review the intended state of a given deployment target.
If you need to do something provider-specific, make a provider-specific module.
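Concretely, something like this minimal Python sketch; FirewallRule and the translation function are made-up names for illustration, and the dict shape is what EC2's authorize_security_group_ingress accepts as an IpPermissions entry:

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    # Provider-neutral primitive that engineers/auditors review in MRs.
    protocol: str       # "tcp" or "udp"
    port: int
    source_cidr: str
    description: str

def to_aws_ingress_permission(rule: FirewallRule) -> dict:
    # Shape taken by EC2's authorize_security_group_ingress (IpPermissions).
    return {
        "IpProtocol": rule.protocol,
        "FromPort": rule.port,
        "ToPort": rule.port,
        "IpRanges": [{"CidrIp": rule.source_cidr, "Description": rule.description}],
    }

# Usage (sketch):
#   ec2.authorize_security_group_ingress(GroupId=sg_id,
#       IpPermissions=[to_aws_ingress_permission(rule)])
# A provider-specific module would hold the equivalent translation for
# GCP firewall rules, on-prem iptables, etc.; the rule definition itself
# never changes.
```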
I agree that using them as a VPS provider is a mistake.
If you don't use the E(lasticity) of EC2, you're burning cash.
For prod workloads, if you can go from 1 to 10 instances during an average day, that's interesting. If you have 3 instances running 24/7/365, go somewhere else.
For dev workloads, being able to spin up instances in a matter of seconds is bliss. I installed the wrong version of a package on my instance? I just terminate it, wait for the auto-scaling group to pop a fresh new one, and start again. No need to waste my time trying to clean up my mess on the previous instance.
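The throwaway step is a single call; a rough boto3 sketch, where the instance id is a placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Throw away the broken dev instance; with ShouldDecrementDesiredCapacity
# set to False, the auto-scaling group launches a fresh replacement.
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId="i-0123456789abcdef0",  # placeholder instance id
    ShouldDecrementDesiredCapacity=False,
)
```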
You speak about Step Functions as an efficient and cost-effective AWS service, and I must admit it's one that I avoid as much as I can... Given the absolute mess that it is to set up and maintain, and that you completely lock yourself into AWS with it, I never pick it for anything. I'd rather have a containerized workflow engine running on ECS, even though I miss out on the few nice features that SF offers within AWS.
The approach I try to have is:
- business logic should be cloud agnostic
- infra should swallow all the provider's pills it needs to be as efficient as possible
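In code, that split tends to look like a tiny port/adapter layer; a minimal Python sketch, with DocumentStore and S3DocumentStore being illustrative names only:

```python
from typing import Protocol

class DocumentStore(Protocol):
    # Business logic depends only on this interface, never on boto3.
    def put(self, key: str, body: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class S3DocumentStore:
    # Provider-specific adapter; lives at the infra edge and is free to
    # use every S3 feature it wants.
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, body: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=body)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()
```

Swapping providers then means writing another adapter, not touching the business logic.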
>business logic should be cloud agnostic
In practice I've found this to be more burden than it's worth. I have yet to work somewhere that is on Azure, GCP, or AWS and actually switches between clouds. I'm sure it happens, but is it really that common?
I instead think of these platforms as a marriage: you're going to settle into one and do your best to never divorce.
Part of my job is to do migrations for customers, so, to me at least, it's not uncommon.
Using all the bells and whistles of a provider and being locked in is one thing. But the other big issue is that, as service providers, they can (and some of them have, more often than not) stop providing some services or change them in a way that forces you to make big changes in your app to keep it running on that service.
Whereas if you build your app in an agnostic way, they can stop or change whatever they want: either you don't rely on those services heavily enough for the required changes to be huge, or you can just deploy elsewhere, with another flavor of the same service.
Let's say you have a legacy Java app that works only with a library that is no longer maintained. If you don't want to bear the cost of rewriting it with a new, maintained library, you can keep the app running, knowing the risks and taking the necessary steps to protect yourself against them.
Whereas if your app relies heavily on DynamoDB's API and they decide to drop the service completely, the only way to keep the app running is to rewrite everything for a similar service, or to find another service elsewhere that exposes the same API.
Because the compartmentalization of business duties means devs are fighting uphill, into the wind, to sign a deal with a new vendor for anything. It's business bikeshedding: as soon as you open the door to a new vendor, everyone, especially finance, has opinions, and you might end up stuck with a vendor you didn't want. Or you can use the pre-approved money furnace and just ship.