Because I can go from main.go to a load balanced, autoscaling app with rolling deploys, segregated environments, logging & monitoring in about 30 minutes, and never need to touch _any_ of that again. Plus, if I leave, the guy who comes after me can look at a helm chart, terraform module + pipeline.yml and figure out how it works. Meanwhile, our janky shell-script-based task scheduler craps out on something new every month. What started as 15 lines of "docker run X; sleep 30; docker kill X" is now a polyglot monster to handle all sorts of edge cases.
I have spent vanishingly close to zero hours maintaining our (managed) Kubernetes clusters at work over the past 3 years, and if I didn't show up tomorrow my replacement would be fine.
If you can do all that in 30 minutes (or even a few hours), I would love to read an article/post about your setup, or any resources you might recommend.
I've just done it a dozen times at this point. Hello world from gin-gonic [0], terraform file with a DO K8s cluster [1] and load balancer, and CI/CD [2] on deploy. There's even time to make a cuppa when you run terraform.
We use this for our internal services at work, and the last time I touched the infra was in 2022, according to git; a rough sketch of the Terraform side is below the links.
[0] https://github.com/gin-gonic/gin
[1] https://gist.github.com/donalmacc/0efbb0b377533232da3f776c60....
[2] https://docs.digitalocean.com/products/kubernetes/how-to/dep...
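For reference, a minimal sketch of the kind of Terraform that file contains, assuming the digitalocean provider (resource names, region, and node sizes here are placeholders, not the exact contents of [1]):

    terraform {
      required_providers {
        digitalocean = {
          source = "digitalocean/digitalocean"
        }
      }
    }

    variable "do_token" {}

    provider "digitalocean" {
      token = var.do_token
    }

    # Pick whichever DOKS version is currently supported.
    data "digitalocean_kubernetes_versions" "current" {}

    resource "digitalocean_kubernetes_cluster" "demo" {
      name    = "demo-cluster"
      region  = "lon1"
      version = data.digitalocean_kubernetes_versions.current.latest_version

      node_pool {
        name       = "default"
        size       = "s-2vcpu-4gb"
        node_count = 2
      }
    }

The load balancer itself appears once you apply a Service of type LoadBalancer against the cluster; DigitalOcean provisions it for you.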
I spent zero hours on a MySQL server on bare hardware for seven years.
Admittedly, I was afraid of ever restarting it, as I wasn't sure it would come back up. But still…
You still need to get MySQL installed and configured, though. On AWS, it's about 30 lines of Terraform for RDS on an internal subnet, with a security group that only allows access from your cluster.
For that, you get automated backups, very simple read proxies, and managed updates if you ever need them. You can vertically scale down, or up to the point of "it's cheaper to hire a DBA to fix this".
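Roughly, and assuming your VPC, private subnets, and the cluster's security group already exist as variables, those ~30 lines look something like this (identifiers and sizes are placeholders):

    resource "aws_db_subnet_group" "db" {
      name       = "app-db"
      subnet_ids = var.private_subnet_ids
    }

    # Only the cluster's nodes/pods can reach the database.
    resource "aws_security_group" "db" {
      name   = "app-db"
      vpc_id = var.vpc_id

      ingress {
        from_port       = 3306
        to_port         = 3306
        protocol        = "tcp"
        security_groups = [var.cluster_security_group_id]
      }
    }

    resource "aws_db_instance" "app" {
      identifier              = "app-mysql"
      engine                  = "mysql"
      engine_version          = "8.0"
      instance_class          = "db.t4g.medium"
      allocated_storage       = 50
      db_subnet_group_name    = aws_db_subnet_group.db.name
      vpc_security_group_ids  = [aws_security_group.db.id]
      username                = "app"
      password                = var.db_password
      publicly_accessible     = false
      backup_retention_period = 7
      skip_final_snapshot     = false
    }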
You better invest some time in migrating away from your 5.7 (or earlier) in that case, because it's EOL already ;)
You'll need to touch it again. These paid services tend to change all the time.
You also need to pay them, which is an event.
Why wouldn't you use Kubernetes? There are basically 3 classes of deployments:
1) We don't have any software, so we don't have a prod environment.
2) We have 1 team that makes 1 thing, so we just launch it out of systemd.
3) We have between 2 and 1000 teams that make things and want to self-manage when stuff gets rolled out.
Kubernetes is case 3. Like it or not, teams that don't coordinate with each other are how startups scale, just like big companies. You will never find a director of engineering who says "nah, let's just have one giant team and one giant codebase".
On AWS, at least, there are alternatives such as ECS and even plain old EC2 auto scaling groups. Teams can have the autonomy to run their infrastructure however they like (subject to whatever corporate policy and compliance regime requirements they might have to adhere to).
Kubernetes is appealing to many, but it is not 100% frictionless. There are upgrades to manage, control plane limits, leaky abstractions, different APIs from your cloud provider, different RBAC, and other things you might prefer to avoid. It's its own little world on top of whatever world you happen to be running your foundational infrastructure on.
Or, as someone has artistically expressed it: https://blog.palark.com/wp-content/uploads/2022/05/kubernete...
K8S has a credible local development and testing story; ECS and ASGs do not. The fact that there's a generic interface for load-balancer-like things, and that you can have a different implementation on your laptop, in the datacenter, and in AWS, and everything ports, is huge.
Also, you can bundle your load balancer config and application config together. No handing a written description of the load balancer config plus an RPM file to a disinterested team somewhere else in the org.
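As a concrete (hypothetical) example, the "load balancer" is just another manifest that lives next to the Deployment it fronts; the same spec is satisfied by MetalLB or a minikube tunnel locally, and by an ELB or a DO load balancer in the cloud, with nothing changing per environment:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer
      selector:
        app: web          # matches the Deployment's pod labels
      ports:
        - port: 80
          targetPort: 8080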
The alternatives aren't frictionless either; many items from that image are not specific to Kubernetes. I personally find AWS APIs frustrating to use, so even if I were running a one-person shop (and was bound to AWS for some reason - maybe a warlock has cursed me?) I'd lean towards managing things from EKS to get an interface that fits my brain better. It's just preference, though - EC2 auto-scaling is perfectly viable if that's your jam.
The iceberg is fine, but using ECS doesn't absolve you from needing to care about monitoring, affinity, audit logging, OS upgrades, authentication/IAM, etc. That's generally why organizations choose to have infrastructure teams, or to not have infrastructure at all.
I have seen people rewrite Kubernetes in CloudFormation. You can do it! But it certainly isn't problem-free.
One giant codebase is fine. Monorepo is better than lots of scattered repos linked together with git hashes. And it doesn't really get in the way of each team managing when stuff gets rolled out.
I'm a big monorepo fan, but you run into that ownership problem. "It's slow to clone"; which team fixes that?
Google has one giant codebase. I am pretty sure they aren't the only ones.
This is my case. I'm a one-man show at the moment, so no DBA. I'm still using Kubernetes. Many things can be automated as simply as a helm upgrade --install. Plus you get the benefit of not having a hot mess of systemd services, ad hoc tools you don't remember how you configured, a plethora of bash scripts to do common tasks, and so on.
I see Kubernetes as a one-time (mental and time) investment that buys me somewhat smoother sailing, plus some other benefits.
Of course it is not all rainbows and unicorns. Having a single nginx server for a single /static directory would be my dream instead of MinIO and such.
I wouldn't push to implement Kubernetes until I had 100 engineers and a reason to use it.
I think a lot of startups have a set of requirements that is something like:
- I want to spin up multiple redundant instances of some set of services
- I want to load balance over those services
- I want some form of rolling deploy so that I don’t have downtime when I deploy
- I want some form of declarative infrastructure, not click-ops
Given these requirements, I can’t think of an alternative to managed k8s that isn’t more complex.
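To make that concrete, here's a minimal sketch of how the first three requirements map onto a single Deployment (the name, image, and numbers are placeholders; the load balancing comes from a Service object next to it, and the manifest checked into git is the declarative part):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 3                  # multiple redundant instances
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0        # keep serving while a deploy rolls out
          maxSurge: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          containers:
            - name: api
              image: registry.example.com/api:1.2.3   # placeholder image
              ports:
                - containerPort: 8080
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 8080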
A startup with no DBA does not need redundant anything. Too small.
This is a sweeping generalization to make, and I think you underestimate how easy it is to achieve redundancy with modern tools these days.
My company uses redundant services because we like to deploy frequently, and our customers notice if our API breaks while the service is restarted. Running the service redundantly allows us to do rolling deploys while continuing to serve our API. It’s also saved us from downtime when a service encounters a weird code path and crashes.
Uh? Even some larger startups don't have DBAs anymore. For better or for worse. Hell even the place I currently work in, which is not a startup at all has basically no DBA role to speak of.
Places get pretty big with no dedicated DBA resources these days. Last place I was at was a Fintech SaaS with 50 engineers and half a million paying customers.
Running off a couple of medium RDS databases (in the $3k/month-each range) with failover set up, and ECS for the apps.
The databases looked after themselves. The senior people probably spent about 20% of an FTE on stuff like optimizing them when load crept up.
The place before that was a similar size and had no DBA either. People just muddled through.
AWS Copilot (if you're on AWS). It's a bit like the older Elastic Beanstalk for EC2.
Because it works, the infra folks you hired already know how to use it, the API is slightly less awful than working with AWS directly, and your manifests are kinda sorta portable in case you need to switch hosting providers for some reason.
Helm is the only infrastructure package manager I've ever used where you could reliably get random third party things running without a ton of hassle. It's a huge advantage.
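For example, getting a random third-party dependency like ingress-nginx running is usually just a couple of commands (the namespace and release name are up to you):

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace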
To make up for having a better schema in Terraform than in the database.
Because they are on AWS and can't use Cloud Run.