Comment by busterarm
4 years ago
Having participated in full-on re-architectures to Kubernetes several times at this point, I can say that it wasn't justified a single time, even at the 10^4-microservice scale. Now, having participated in several fairly effortless rollouts of Nomad and arrived at a better place, it's funny to watch the rest of the industry cargo cult.
I don't actually believe either of these solutions is the long-term settled end state the industry will land on (though Nomad with Firecracker, or really any other Firecracker-centric solution that isn't k8s, will have some legs), but I agree wholeheartedly that Kubernetes is purely a distraction, having seen the pain and the gnashing of teeth.
Every painless Kubernetes story that I've seen is at a scale where Kubernetes wasn't even necessary/justifiable and another solution would have been even simpler. But at least it's good for the resume.
And Helm charts are akin to K8s' own borgcfg. We're doomed to repeat our mistakes, it seems.
To counter your argument: I did one full migration from AWS to GCP Kubernetes (GKE). The project was a huge success, simplifying our stack, deployments, logging, etc.
We reduced our costs by two-thirds, saving millions of dollars. Teams have been able to move on to feature work instead of maintaining custom deployment tooling. The Ops team is half the size it was before but is able to handle twice as many customers.
Can you elaborate more? Why was it not justified? The more detail the better (if possible, concrete issues that you faced).
I've seen small startups waste their time with things like self-hosted Kubernetes in a quarter-rack of colo space for workloads they could have instead hosted with KVM or on cloud instances with 1/100th of the management overhead. Scale that up to a half-dozen racks of servers and you're still telling the same story. Even OpenStack, of all things, is easier to manage.
This covers the scale of the actual operations of 99% of companies. Probably some more 9's there afterwards. Go and read about how StackOverflow's infrastructure has developed over the years and how damned simple and effective it is.
If you aren't Fortune 100 or you don't have extremely specific performance needs for your (re)deployments, then it's highly likely that rolling out Kubernetes infrastructure is akin to driving screws with a sledgehammer.
I think part of this is the downside of capital being far too cheap for too long. Companies have way overbuilt their infrastructure for reasons that aren't really moving the needle. Many barely make an effort to control their costs. At most companies, my expectation going in is to look at their infrastructure and see somewhere between 0.02% and 10% hardware utilization across everything they have. Even the companies running Kubernetes hardly seem to be doing a better job, because very rarely is Kubernetes running 100% of their infrastructure, if even 10%.
Right, but they can achieve the same with managed Kubernetes.
I am not sure what management overhead you are referring to. I.e., what do you "manage" (as a human) if you choose a managed Kubernetes offering (e.g. DigitalOcean or Linode) vs. OpenStack?
Also, I am not sure that hardware utilization issues have any relation to Kubernetes; you would have that problem regardless, no?
In general, my own motto is that if you do not use Kubernetes, you will end up writing it yourself.
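To make that concrete: here's a minimal, hypothetical sketch of the kind of homegrown "mini-orchestrator" teams end up writing without Kubernetes or Nomad — rolling deploys plus health checks. Every name in it (the host inventory, the deploy-app command, the /healthz endpoint) is invented for illustration.

    # Hypothetical sketch: the ad-hoc rolling-deploy tooling teams tend to
    # accumulate without an orchestrator. Hosts, the deploy command, and the
    # health endpoint are all made-up placeholders.
    import subprocess
    import time
    import urllib.request

    HOSTS = ["app1.internal", "app2.internal", "app3.internal"]  # assumed inventory
    HEALTH_URL = "http://{host}:8080/healthz"                    # assumed endpoint

    def is_healthy(host, timeout=2.0):
        # Poll the app's health endpoint; any error counts as unhealthy.
        try:
            with urllib.request.urlopen(HEALTH_URL.format(host=host), timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def rolling_deploy(version):
        # Upgrade one host at a time; wait for it to come back healthy
        # before moving on, and give up after a deadline.
        for host in HOSTS:
            subprocess.run(["ssh", host, "deploy-app --version " + version], check=True)
            deadline = time.time() + 60
            while not is_healthy(host):
                if time.time() > deadline:
                    raise RuntimeError(host + " failed health check after deploy")
                time.sleep(2)

    rolling_deploy("1.4.2")

Add restarts on failure, bin-packing, secrets, and service discovery to that, and you've re-derived a scheduler, which is the point.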
I am finally having to deal with Kubernetes, and the only thing positive I can say is that I miss WebSphere and WebLogic.