Comment by KronisLV

1 day ago

Docker is great development tooling (still some rough edges, of course).

Docker Compose is good for running things on a single server as well.
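For a single server, that typically looks something like this (a minimal sketch; the service names and images here are made up for illustration):

```yaml
# docker-compose.yml — hypothetical single-server stack
services:
  web:
    image: nginx:stable        # reverse proxy in front of the app
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: .                   # your application, built from the local Dockerfile
    restart: unless-stopped
```

One `docker compose up -d` and the whole stack is running, which is most of the appeal.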

Docker Swarm and Hashicorp Nomad are good for multi-server setups.

Kubernetes is... enterprise and I guess there's a scale where it makes sense. K3s and similar sort of fill the gap, but I guess it's a matter of what you know and prefer at that point.

Throw Portainer on a server and the DX is pretty casual (when it works and doesn't have weird networking issues).

Of course, there's also other options for OCI containers, like Podman.

> Docker Swarm

IS that a thing still?

> Kubernetes is... enterprise

I would contest that. It's complex, but not enterprise.

Nomad is a great tool for running processes on things. The problem is that attaching load balancers/reverse proxies to those processes requires engineering. It comes for "free" with k8s via ingress controllers.
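For comparison, the "free" routing in k8s is roughly this much YAML (a sketch; the host, service name, and port are hypothetical):

```yaml
# Illustrative Ingress resource — an ingress controller watches these
# and configures the actual reverse proxy for you.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080
```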

  • > IS that a thing still?

    Yeah, using it in production. If you don't need the equivalent of CRDs or other complex stuff like network meshes, it's stable and pretty okay! My ingress is just a regular web server image, for example.

    > It comes for "free" with k8s with ingress controllers.

    Ingress Controllers will keep working, but the API is frozen; I think nowadays you're supposed to use Gateway instead: https://gateway-api.sigs.k8s.io/

  • Docker Swarm is getting development again (caught something on their Slack a few weeks back).

    I'd also contest that k8s is enterprise. Unless by enterprise you just mean over-engineered, in which case I agree.

  • I tried it out last year when I wanted to ditch our compose stuff and wanted to like it, but yeah, it seemed like it was mostly a zombie project. Plus it had a lot of sharp edges, IIRC. I forget what, exactly. Secrets? Ingress? Something like that.
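For reference, the Gateway API mentioned above replaces the Ingress resource with an HTTPRoute attached to a Gateway; a minimal sketch (the gateway name, hostname, and backend are made up):

```yaml
# Illustrative HTTPRoute from the Gateway API
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
spec:
  parentRefs:
    - name: my-gateway        # a Gateway resource defined elsewhere
  hostnames:
    - "app.example.com"
  rules:
    - backendRefs:
        - name: app           # Service to route to
          port: 8080
```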

> Docker is great development tooling (still some rough edges, of course).

Show me Docker in use where build caching was solved optimally for development builds (like, e.g., make did for C 40 or 50 years ago)?

Perhaps you consider Docker layers one of the "rough edges", but I believe instant, iterative development builds are a minimum required for "great development tooling".

I did have great fun optimizing Docker build times, but more in the "it's a great engineering challenge to make this shitty thing build fast" sense.
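One of the better tricks here is BuildKit cache mounts, which let a package manager's cache survive across builds instead of being re-downloaded whenever a layer is invalidated. A sketch, assuming a Maven project (paths and image tag are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
# Sketch: persist the Maven repository cache across builds with a cache mount.
FROM eclipse-temurin:21-jdk AS build
WORKDIR /src
COPY pom.xml .
# The cache mount keeps /root/.m2 between builds, so dependencies
# aren't re-fetched every time the source layer changes.
RUN --mount=type=cache,target=/root/.m2 mvn -q dependency:go-offline
COPY src ./src
RUN --mount=type=cache,target=/root/.m2 mvn -q package -DskipTests
```

It's still not make-style incremental compilation, but it closes some of the gap.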

  • A multi-stage Docker build where you separate pulling in dependencies from building the thing you want is as close as you're going to get.

    Something like the following works well in practice:

      1) pinned base image (e.g. Ubuntu LTS)
      2) your own custom base image in a registry rebuilt whenever you want (e.g. with tools you need for debugging or available across all of your images)
      3) your own runtime-specific base image, like a JDK one, can be used later both as a basis for development images with additional tooling, as well as for runtime images of your app
      4) your own runtime-specific development images, like one that's based on the JDK image above + Maven, alongside any other development tooling you need
      5) your multi-stage application image: the first stage uses the development image to COPY in the dependency descriptor files and pull the dependencies, then runs the build (the layer cache reuses things where possible); the second stage is based on the runtime image (e.g. JDK) and just copies in your finished artifact (e.g. a .jar file)
    

    If you don't need or want to build your own images, you can fold steps 1-4 into just using upstream images off of Docker Hub or whatever you prefer, but in practice this works pretty okay across numerous stacks. Of course, it's also easy to have very high standards for what counts as "optimal", so Docker probably won't live up to that.
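    Step 5 above might look something like this (a sketch; the my-registry/... image names stand in for the custom images from steps 2-4, and the Maven/JDK specifics are just one example stack):

```dockerfile
# Build stage: based on the hypothetical development image (step 4)
FROM my-registry/jdk-dev:21 AS build
WORKDIR /app
# Copy dependency descriptors first so the layer cache survives source changes
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: based on the slimmer runtime image (step 3)
FROM my-registry/jdk:21
COPY --from=build /app/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```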