Comment by mystifyingpoi

4 days ago

Docker Compose (ignoring Swarm, which seems to be obsolete) manages containers on a single machine. With Kubernetes, the pod that hosts the database is a pod like any other (I assume): it gets moved to a healthy machine when a node goes bad, respects CPU/mem limits, works with generic monitoring tools, can be deployed from GitOps tools, and so on. All the k8s goodies apply.
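
As a concrete sketch (all names, image tags, and sizes below are made up for illustration, not recommendations), the database ends up declared like any other workload:

    # A Postgres instance declared as an ordinary Kubernetes workload. The
    # controller reschedules the pod if its node dies, and the resources block
    # is enforced the same way it is for every other pod.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: pg                          # hypothetical name
    spec:
      serviceName: pg
      replicas: 1
      selector:
        matchLabels:
          app: pg
      template:
        metadata:
          labels:
            app: pg
        spec:
          containers:
          - name: postgres
            image: postgres:16          # any recent image
            resources:
              requests:
                cpu: "2"
                memory: 4Gi
              limits:
                memory: 4Gi
            ports:
            - containerPort: 5432
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:             # where the data actually lives -- see below
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi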

When it comes to a DB, moving the process around is easy; it's the data that matters. The reason bare-metal-hosted DBs are so fast is that they use direct-attached storage instead of networked storage. You lose that speed advantage if you move to distributed storage (Ceph etc.).
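
In Kubernetes terms that trade-off shows up in which StorageClass the database volumes come from; a rough sketch of the two flavours (the Ceph values are placeholders for cluster-specific settings):

    # Networked/distributed storage, e.g. Ceph RBD via the ceph-csi driver:
    # volumes survive a node dying, but every read and write crosses the network.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: rbd.csi.ceph.com
    parameters:
      clusterID: "my-ceph-cluster"      # placeholder
      pool: "my-rbd-pool"               # placeholder
    ---
    # Direct-attached local disks: near bare-metal speed, but each volume is
    # tied to one node, so the pod that uses it is tied to that node too.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-ssd
    provisioner: kubernetes.io/no-provisioner   # local PVs are created out of band
    volumeBindingMode: WaitForFirstConsumer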

  • You don’t need to use networked storage; the Zalando Postgres operator can just use local storage on the host. It uses a StatefulSet underneath, so the pods stay on the same node until you migrate them (roughly the kind of manifest sketched at the end of this thread).

    • But if I'm pinning it to dedicated machines, then Kubernetes doesn't give me anything, yet I still have to deal with its tradeoffs and moving parts, which in my experience are more likely to bring me down than actual hardware failure.

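To make the local-storage reply above concrete, here is a rough sketch of a Zalando operator manifest pointed at a local-storage class (the team id, instance count, sizes, and class name are illustrative, not operator defaults):

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: acid-demo-db                # the operator expects the teamId as the name prefix
    spec:
      teamId: "acid"
      numberOfInstances: 2
      postgresql:
        version: "16"
      volume:
        size: 100Gi
        storageClass: local-ssd         # hypothetical StorageClass backed by local PVs
    # The operator renders this into a StatefulSet, so each instance keeps its
    # local volume and stays on its node until it is deliberately moved.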