Comment by general1726
5 days ago
I think there are more of us who kind of degenerated from doing it the AWS way (API Gateway, serverless Lambdas, messing around with IAM roles until it works, ...) to "give me an EC2 / Lightsail VPS instance, maybe an S3 bucket, let's set the domain through Route 53, and go away with the rest of your AWS orchestration."
At what point is AWS worth using over other compute competitors when you’re using them as a storage bucket + VPS. They’re wholly more expensive at that point. Why not go with a more traditional but rock solid VPS provider?
I have the opposite philosophy, for what it's worth: if we are going to pay for AWS, I want to use it correctly, but maximally. So, for instance, if I can offload N things to Amazon and it's appropriate to do so, that's preferable. Step Functions, Lambda, DynamoDB, etc. have, over time, come to supplant their alternatives, and it's overall more efficient and cost effective.
That said, I strongly believe developers don't give enough consideration to how to maximize vendor usage in an optimal way
Your management will frequently be strangely happier to waste money on AWS, unfortunately.
Truly a marketing success.
> That said, I strongly believe developers don't give enough consideration to how to maximize vendor usage in an optimal way
Because it's not straightforward. 1) You need to have general knowledge of AWS services and their strong and weak points to be able to choose the optimal one for the task, 2) you need to have good knowledge of the chosen service (like DynamoDB or Step Functions) to be able to use it optimally; being mediocre at it is often not enough, 3) local testing is often a challenge or plain impossible, you often have to do all testing on a dev account on AWS infra.
Most work isn’t greenfield.
AWS can be used in a different, cost-effective way.
It can be a middle ground, capable of serving the existing business while building towards a cloud-agnostic future.
The good AWS services (S3, EC2, ACM, SSM, R53, RDS, the metadata service, IAM, and E/A/NLBs) are actually good, even if tracking their billing changes is a concern.
If you architect with these primitives, you are not beholden to any cloud provider, and can cut over traffic to a non AWS provider as soon as you’re done with your work.
Of that list, watch out since IAM != IAM != IAM, so "cloud agnostic" is that famous 80/20 split
I agree that using them as a VPS provider is a mistake.
If you don't use the E(lasticity) of EC2, you're burning cash.
For prod workloads, if you can go from 1 to 10 instances during an average day, that's interesting. If you have 3 instances running 24/7/365, go somewhere else.
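To put rough numbers on that, here's a back-of-the-envelope sketch. The hourly rate and the 1-to-10 scaling profile are made up for illustration, not real EC2 pricing:

```python
# Back-of-the-envelope instance-hours: a fleet provisioned statically for
# peak load vs. one that autoscales between 1 and 10 instances.
# The hourly rate is a made-up placeholder, not a real EC2 price.
HOURLY_RATE = 0.10   # $/instance-hour (illustrative)
DAYS = 30

# Static: 10 instances running 24/7 to cover the daily peak.
static_hours = 10 * 24 * DAYS

# Autoscaled: 1 instance for 16 off-peak hours, 10 for an 8-hour peak.
autoscaled_hours = (1 * 16 + 10 * 8) * DAYS

print(f"static:     {static_hours} h -> ${static_hours * HOURLY_RATE:.2f}")
print(f"autoscaled: {autoscaled_hours} h -> ${autoscaled_hours * HOURLY_RATE:.2f}")
```

The flip side is the point above: if your load is flat at 3 instances, the elastic pricing buys you nothing.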
For dev workloads, being able to spin up instances in a matter of seconds is bliss. Installed the wrong version of a package on my instance? I just terminate it, wait for the auto-scaling group to pop a fresh new one, and start again. No need to waste my time trying to clean up my mess on the previous instance.
You mention Step Functions as an efficient and cost-effective AWS service, and I must admit it's one I avoid as much as I can... Given the absolute mess it is to set up and maintain, and that you completely lock yourself into AWS with it, I never pick it for anything. I'd rather have a containerized workflow engine running on ECS, even though I miss out on the few nice features SF offers within AWS.
The approach I try to have is:
- business logic should be cloud agnostic
- infra should swallow all the provider's pills it needs to be as efficient as possible
> business logic should be cloud agnostic
In practice I've found this to be more of a burden than it's worth. I have yet to work somewhere that is on Azure, GCP, or AWS and has actually switched between clouds. I'm sure it happens, but is it really that common?
I instead think of these platforms as a marriage: you're going to settle into one and do your best to never divorce.
Because the compartmentalization of business duties means that devs are fighting uphill against the wind to sign a deal with a new vendor for something. It's business bikeshedding, as soon as you open the door to a new vendor everyone, especially finance, has opinions and you might end up stuck with a vendor you didn't want. Or you can use the pre-approved money furnace and just ship.
There are entire industries that have largely de-volved their clouds primarily for footprint flexibility (not all AWS services are in all regions) and billing consistency.
Honestly, just having to manage IAM is such a time-suck that the way I've explained it to people is: we've traded the time we used to spend administering systems for time spent just managing permissions, and IAM is so obtuse that it comes out as a net loss.
There's a sweet spot somewhere between raw VPSes and insanely detailed least-privilege serverless setups that I'm trying to revert to. Fargate isn't unmanageable as a candidate; not sure it's The One yet, but I'm going to try moving more workloads to it to find out.
Usually I write some IaC to automate this tedium so I only have to go through the IAM setup pain once. Now if requirements change, that's an entirely different story...
So the problem when you combine IaC with CI/CD is that the role assumed by the CI agent needs privileges to deploy things, so you need a bootstrap config to set up what it needs. If you have a mandate to go least-privilege, that config needs to include only the permissions strictly needed by the current deployable. So no "s3:*"; you need each action listed.
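For example, a least-privilege statement of that shape enumerates every action instead of using a wildcard. A sketch (the bucket name and action list here are purely illustrative):

```python
import json

# Sketch of a least-privilege deploy policy: every action enumerated,
# no wildcards like "s3:*". Bucket name and actions are illustrative.
DEPLOY_ACTIONS = [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListBucket",
]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": DEPLOY_ACTIONS,
            "Resource": [
                "arn:aws:s3:::example-deploy-bucket",
                "arn:aws:s3:::example-deploy-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```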
So far so good, you can do this with a bootstrap script that you only need to run at project setup.
If you also have a mandate (effectively) to go fully serverless, then as your project evolves and you add functionality, what you find is that most interesting changes use something new in the platform. So you're not getting away with running the bootstrap script once. You're updating it and running it for almost every change. And you can't tell in advance what permissions you're going to need, because (especially if you're on Terraform) there's apparently no documentation connecting the resources you want to manage with the permissions needed to manage them.

So you try to deploy your change, IAM pops an error or two, you figure out what permissions to add to the bootstrap script, you run it (fixing it when it breaks at this point), you try deploying again, IAM pops another couple of errors, and now you're in a grind cycle whose length you can't predict - and you need to get to the end of it before you can even test your feature, because fully serverless means you can't run your application locally (and getting management to pay for the pro LocalStack licence is a dead end).

At some point it won't be clear why IAM is complaining, because the error you get makes no sense whatsoever. So it's off to support, to find out a day later that ah, yes, you can't use an assumed role just there, it's got to be an actual role, and no, that's not written down anywhere, you just have to know it, so you need to completely redesign how you're using the roles. Right about this point is when I usually want to buy a farm, raise goats, and get way too into oil painting, instead of whatever this insane waste of life is.