Comment by johnmaguire
1 day ago
The complaints I see about Kubernetes are typically more about one of two things: (a) this looks complex to learn, and I don't have a need for it - existing deployment patterns solve my use case, or (b) Kubernetes is much less inefficient than running software on bare-metal (energy or cost.)
Usually they go hand in hand.
Which is an interesting perspective, considering I've led a platform based on Kubernetes running on company-owned bare-metal. I was actually hired because developers were basically revolting against leaving the cloud: all the "niceties" cloud providers add (in exchange for that hefty cloud tax) essentially go away on bare-metal. The existing DevOps team was baffled as to why the developers weren't happy being handed a plain Ubuntu VM and told to deploy their stack on it.
By the time I left, the developers didn't really know anything about how the underlying infrastructure worked. They wrote their Dockerfiles, a tiny little file to declare their deployment needs, and then they opened a platform webpage to watch the full lifecycle.
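To illustrate how small that declaration can be, here's a hypothetical sketch of such a "tiny little file" — the field names are invented for illustration, not the actual platform's format. Developers might declare only an image, replica count, and port, and the platform fills in everything else:

```yaml
# Hypothetical platform deployment descriptor (invented schema, not a real spec)
service: billing-api      # name shown in the platform UI
replicas: 3               # platform schedules these across the bare-metal fleet
port: 8080                # platform wires up load balancing and DNS
healthcheck: /healthz     # used for rollout gating
env:
  - DATABASE_URL          # injected by the platform from its secret store
```

The design point is that everything below this line — node placement, networking, TLS, log shipping — is the platform team's problem, not the developer's.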
If you're a single service shop, then yeah, put Docker Compose on it and run an Ansible playbook via GitHub Actions. Done. But for a larger org moving off cloud to bare-metal, I really couldn't see not having k8s there to help buffer some of the pain.
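For the single-service case, the whole pipeline really can be this small. A hedged sketch (file names like `deploy.yml` and `inventory.ini` are placeholders, and the playbook is assumed to push a `docker-compose.yml` to one host):

```yaml
# Hypothetical GitHub Actions workflow: run an Ansible playbook that
# deploys a Docker Compose stack to a single server on every push to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install ansible          # install Ansible on the runner
      - run: ansible-playbook -i inventory.ini deploy.yml
        env:
          ANSIBLE_HOST_KEY_CHECKING: "false"  # demo only; pin host keys in practice
```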
For many shops, even Docker Compose is not necessary. It is still possible to deploy software directly on a VM/LXC container.
I agree that Kubernetes can help simplify the deployment model for large organizations with a mature DevOps team. It is also a model that many organizations share, and so you can hire for talent already familiar with it. But it's not the only viable deployment model, and it's very possible to build a deployment system that behaves similarly without bringing in Kubernetes. Yes, including automatic preview deployments. This doesn't mean I'm provided a VM and told to figure it out. There are still paved-path deployment patterns.
As a developer, I do need to understand the environment my code runs in, whether it is bare-metal, Kubernetes, Docker Swarm, or a single-node Docker host. It impacts how config is deployed and how services communicate with each other. The fact that developers wrote Dockerfiles is proof that they needed to understand the environment. This is purely a tradeoff (abstracting one system, but now you need to learn a new one.)
It can be inefficient because controllers (typically ~40 per cluster) maintain large caches of resource metadata, and kubelet and kube-proxy run fairly tight reconciliation loops. But such things can be tuned, and I don't really consider them issues. The main issue I've actually encountered is that etcd doesn't scale.
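As an example of the kind of tuning meant above: the kubelet's reconcile interval can be relaxed through its config file, trading responsiveness for CPU. A minimal sketch (the 5m value is an arbitrary illustration):

```yaml
# Relax the kubelet's sync loop via KubeletConfiguration
# (kubelet.config.k8s.io/v1beta1).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
syncFrequency: 5m   # default is 1m; a longer interval reduces steady-state CPU
```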
The funniest thing is that Kubernetes was designed to run on bare metal, not in the cloud...
Yeah if someone says that k8s is costing them energy they are either using it very, very incorrectly, or they just don't know what they are talking about.
Running a Kubernetes deployment means running many additional orchestration services that bare-metal deployments (whether on-prem or in the cloud) do not need.
Everything is about trading convenience for knowledge/know how.
It's up to the individual to choose how much knowledge they want to trade away for convenience. All the containers are just forms of that trade.
> (b) Kubernetes is much less inefficient than running software on bare-metal (energy or cost.)
You surely meant "much less efficient than"
I did, thanks for the correction.
There also seems to be confusion about what I meant by "bare-metal." I wasn't intending to refer to the server ownership model, but rather the deployment model where you deploy software directly onto an operating system.