Comment by jchw

7 hours ago

I have plenty of years of actual experience. Maybe not as much as you, but a good 12 years in the industry (including 3 at Google, which doesn't use Docker; it probably wouldn't be effective enough there) and a lot more as a hobbyist.

I just don't agree. I don't find Docker too complicated to get started with at all. A lot of my projects have very simple Dockerfiles. For example, here is a Dockerfile I have for a project with a Node.js frontend and a Go backend:

    FROM node:alpine AS npmbuild
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    RUN npm run build

    FROM golang:1.25-alpine AS gobuilder
    WORKDIR /app
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    COPY --from=npmbuild /app/dist /app/dist
    RUN CGO_ENABLED=0 go build -o /server ./cmd/server
    
    FROM scratch
    COPY --from=gobuilder /server /server
    ENTRYPOINT ["/server"]

It is a glorified shell script that produces an OCI image containing just a single binary. There's a bit of boilerplate, but nothing out of the ordinary in my opinion. It gives you something you can push to an OCI registry and deploy basically anywhere that can run Docker or Podman, whether that's a Kubernetes cluster in GCP, a bare-metal machine with systemd and Podman, a NAS running Synology DSM or TrueNAS or similar, or even a Raspberry Pi if you build for aarch64. All of the configuration can be passed via environment variables or, if you want, additional command-line arguments, since starting a container is very much like starting a process (because it is).
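
Since the final stage is FROM scratch and only copies in /server, the frontend's dist output has to end up inside that binary, which is why it gets copied into the Go stage before `go build`. A minimal sketch of that pattern using `go:embed` (the paths and handler are illustrative, not the exact project code):

    // Sketch: how the Vite build output can end up inside the single Go binary.
    // Assumes the npm build writes its static assets to ./dist; the embed
    // directive bakes them into the executable at compile time.
    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    //go:embed dist
    var distFS embed.FS

    func main() {
        // Strip the "dist/" prefix so "/" serves dist/index.html and friends.
        assets, err := fs.Sub(distFS, "dist")
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/", http.FileServer(http.FS(assets)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }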

But of course, for development you want to be able to iterate rapidly, and don't want to be dealing with a bunch of Docker build BS for that. I agree with this. However, Docker's usefulness doesn't really stop at building for production either. Thanks to OCI images, it's also pretty good for setting up dev environment boilerplate. Here's a docker-compose file for the same project:

    services:
      ui:
        image: node:alpine
        ports: ["5173:5173"]
        working_dir: /app
        volumes: [".:/app:ro", "node_modules:/app/node_modules"]
        command: ["/bin/sh", "-c", "npm ci && npx vite --host 0.0.0.0 --port 5173"]
      server:
        image: cosmtrek/air:v1.60.0
        ports: ["8080:8080"]
        working_dir: /app
        volumes: [".:/app:ro"]
        depends_on: ["postgres"]
      postgres:
        image: postgres:16-alpine
        environment: ["POSTGRES_PASSWORD=postgres"] # dev only; the image refuses to start without a password
        ports: ["5432:5432"]
        volumes: ["postgres_data:/var/lib/postgresql/data"]
    volumes:
      node_modules:
      postgres_data:

And if your application is built from the ground up to handle these environments well, which doesn't take a whole lot (basically, it just needs to take its configuration from the environment, and to make things a little neater it can ship defaults that work well for development), this provides a one-command, auto-reloading development environment whose only dependency is having Docker or Podman installed: `docker compose up` gives you a full local development environment.
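
To make that concrete, the "configuration from the environment, with defaults that work for development" part can be as small as this (a sketch; the variable names and the dev connection string are made up for illustration, not the project's actual config):

    // Sketch: environment-driven configuration with defaults that suit local
    // development. Names here are illustrative, not the project's real keys.
    package main

    import (
        "fmt"
        "os"
    )

    // getenv returns the value of key, or fallback if it is unset or empty.
    func getenv(key, fallback string) string {
        if v := os.Getenv(key); v != "" {
            return v
        }
        return fallback
    }

    type config struct {
        listenAddr  string
        databaseURL string
    }

    func loadConfig() config {
        return config{
            // Defaults are aimed at the docker-compose setup above; production
            // simply overrides them via environment variables.
            listenAddr:  getenv("LISTEN_ADDR", ":8080"),
            databaseURL: getenv("DATABASE_URL", "postgres://postgres:postgres@postgres:5432/postgres?sslmode=disable"),
        }
    }

    func main() {
        cfg := loadConfig()
        fmt.Printf("listen=%s db=%s\n", cfg.listenAddr, cfg.databaseURL)
    }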

I'm omitting some more advanced topics, but these are lightly modified real Docker manifests, mainly just reformatted to fewer lines for HN.

I adopted Kubernetes pretty early on. I felt like it was a much better abstraction to use for scheduling compute resources than cloud VMs, and it was how I introduced infrastructure-as-code to one of the first places I ever worked.

I'm less than thrilled about how complex Kubernetes can get once you start digging into stuff like Helm and ArgoCD and beyond, but in general it's an incredible asset that takes a lot of grunt work out of deployment while providing quite a bit of utility on top.

Is there a book like Docker: The Good Parts that would build a thorough understanding of the basics before throwing dozens of ecosystem brand words at you? How does virtualisation not incur an overhead? How do CPU- and GPU-bound tasks work?

  • > How does virtualisation not incur an overhead?

    I think the key thing here is the difference between OS virtualization and hardware virtualization. When you run a virtual machine, you are doing hardware virtualization: the hypervisor creates fake devices, like a fake SSD, and your virtual machine's kernel speaks to that fake SSD using the NVMe protocol as if it were a real physical SSD. Those NVMe commands are then translated by the hypervisor into changes to a file on your real filesystem, so your real/host kernel then speaks NVMe again to your real SSD. That is where the virtualization overhead comes in (along with having to run that second kernel). This is somewhat helped by using virtio devices or PCIe pass-through, but it is still significant overhead compared to OS virtualization.

    When you run Docker/Kubernetes/FreeBSD jails/Solaris zones/systemd-nspawn/LXC, you are doing OS virtualization. In that situation, your containerized programs talk to your real kernel and access your real hardware the same way any other program would. The only difference is that your process has a flag that identifies which "container" it is in, and that flag instructs the kernel to only show/allow certain things, for example "when listing network devices, only show this tap device" and "when reading the filesystem, only read from this chroot". You're not running a second kernel. You don't have to allocate spare RAM to that kernel. You aren't creating fake hardware, and therefore you don't have to speak to the fake hardware with the protocols it expects. It's just a completely normal process like any other program running on your computer, but with a flag.
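
    If you want to see how literally true that "flag on a process" description is, you can ask the kernel for those flags yourself. A toy sketch in Go (Linux only, needs root; real runtimes layer chroots, cgroups, user namespaces, seccomp, etc. on top, so this is an illustration rather than how Docker itself is implemented):

        // Toy sketch (Linux only, run as root): start a shell in fresh UTS, PID
        // and mount namespaces. Same kernel, same hardware underneath; the clone
        // flags just change what the kernel lets the new process see.
        package main

        import (
            "os"
            "os/exec"
            "syscall"
        )

        func main() {
            cmd := exec.Command("/bin/sh")
            cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
            cmd.SysProcAttr = &syscall.SysProcAttr{
                Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
            }
            if err := cmd.Run(); err != nil {
                panic(err)
            }
        }

    (Inside that shell, `ps` will still show host processes until you remount /proc, which is one of the extra steps real runtimes take.)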