Comment by benreesman
2 years ago
We had FB up to 6 figures in servers and a billion MAUs (conservatively) before even tinkering with containers.
The “control plane” was ZooKeeper. Everything had bindings to it, Thrift/Protobuf goes in a znode fine. List of servers for FooService? znode.
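A minimal sketch of that znode pattern, using JSON as a stand-in for the Thrift/Protobuf structs the comment describes (the `FooService` name and `/services/foo` path are illustrative, not real FB paths):

```python
# Sketch of znode-based service discovery: the znode holds a serialized
# server list; every client has a ZooKeeper binding and just reads it.
# JSON stands in here for the actual Thrift/Protobuf payload.
import json

def encode_server_list(servers):
    """Serialize a host:port list the way it might be stored in a znode."""
    return json.dumps({"servers": servers}).encode("utf-8")

def decode_server_list(payload):
    """Parse a znode payload back into a host:port list."""
    return json.loads(payload.decode("utf-8"))["servers"]

# In real use a client would fetch the payload through its ZK binding,
# something like:  data, _ = zk.get("/services/foo")   # illustrative
payload = encode_server_list(["10.0.0.1:9090", "10.0.0.2:9090"])
print(decode_server_list(payload))  # → ['10.0.0.1:9090', '10.0.0.2:9090']
```

The point isn't the wire format; it's that a lock server plus a naming convention already gives you service discovery without a separate control-plane product.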
The packaging system was a little more complicated than a tarball, but it was spiritually a tarball.
Static link everything. Dependency hell: gone. Docker: redundant.
The deployment pipeline used hypershell to drop the packages and kick the processes over.
There were hundreds of services and dozens of clusters of them, but every single one was a service because it needed a different SKU (read: instance type), or needed to be in Java or C++, or had some other engineering reason. If it didn’t have a real reason, it went in the monolith.
This was dramatically less painful than any of the two dozen shops I’ve consulted for that run kube and shit. It’s not that I can’t use Kubernetes, I know the k9s shortcuts blindfolded. But it’s no fun. And pros built these deployments and did it well; serious Kubernetes people can do everything right and it’s still complicated.
After 4 years of hundreds of elite SWEs and PEs (SRE) building a Borg-alike, we’d hit parity with the bash and ZK stuff. And it ultimately got to be a clear win.
But we had an engineering reason to use containers: we were on bare metal, containers can make a lot of sense on bare metal.
In a hyperscaler that has a zillion SKUs on-demand? Kubernetes/Docker/OCI/runc/blah is the friggin Bezos tax. You’re already virtualized!
Some of the new stuff is hot shit, I’m glad I don’t ssh into prod boxes anymore, let alone run a command on 10k at the same time. I’m glad there are good UIs for fleet management in the browser and TUI/CLI, and stuff like TailScale where mortals can do some network stuff without a guaranteed zero day. I’m glad there are layers on top of lock servers for service discovery now. There’s a lot to keep from the last ten years.
But this “yo dawg, I heard you like virtual containers in your virtual machines so you can virtualize while you virtualize” shit is overdue for its CORBA/XML/microservices/many-many-many-repos moment.
You want reproducibility. Statically link. Save Docker for a CI/CD SaaS or something.
You want pros handling the datacenter because pets are for petting: pay the EC2 markup.
You can’t take risks with customer data: RDS is a very sane place to splurge.
Half this stuff is awesome, let’s keep it. The other half is job security and AWS profits.
> We had FB up to 6 figures in servers and a billion MAUs (conservatively) before even tinkering with containers.
That would have been around the time containers entered the public/developer consciousness, no?