Comment by arijun

6 months ago

Is that not the premise of docker?

No, it's the opposite: the entire premise of Docker over VMs is that all the containers share one instance of the OS kernel, so it takes fewer resources than a VM, and the portable images are smaller because they don't contain an OS image.

  • The premise is containerization, not necessarily particular resource usage by the host running the containers.

    For hosted services, you want to choose - is it worth running a single kernel with a lot of containers for the cost savings from shared resources, or isolate them by making them different VMs. There are certainly products for containers which lean towards the latter, at least by default.

    For development it matters a lot less, as long as the sum resources of containers you are planning to run don't overload the system.

    • The VM option is relatively new; the original idea was to provide that isolation without the weight of a VM. Also, I'm not sure that Docker didn't coin the word "containerization": I've always associated it with specifically the kind of packaging Docker provides, and I don't remember it being mentioned around VMs.

  • On Windows containers you can choose whether the kernel is shared across containers or not; it is only in Linux containers mode that the kernel gets shared.
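    For illustration, Docker on a Windows host exposes that choice through the `--isolation` flag. A sketch, assuming Docker in Windows containers mode on a host whose build matches the `nanoserver` image (process isolation requires a matching kernel build):

    ```shell
    # Shared-kernel mode: the container runs directly on the host's Windows kernel
    docker run --rm --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver

    # Hyper-V isolation: the container gets its own lightweight utility VM and kernel
    docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
    ```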

Nope, docker uses the host's kernel, so there are zero additional kernels.

On non-Linux, you obviously need an additional kernel running (the Linux kernel). In this case, there are N additional kernels running.

  • > On non-Linux, you obviously need an additional kernel running (the Linux kernel).

    That seems to be true in practice, but I don't think it's obviously true. As WSL1 shows, it's possible to make an emulation layer for Linux syscalls on top of quite a different operating system.

    • I would draw the opposite conclusion from the WSL1 attempt.

      It was a strategy that failed in practice and needed to be replaced with a VM-based approach.

      The Linux kernel has a huge surface area with some subtle behavior in it. There was no economical way to replicate all of that and keep it up to date in a proprietary kernel, especially as VM tech is well established and reusable.

  • > On non-Linux, you obviously need an additional kernel running (the Linux kernel)

    Only "obvious" for running Linux processes using Linux container facilities (cgroups)

    Windows has its own native facilities allowing Windows processes to be containerised. It just so happens that in addition to that, there's WSL2 at hand to run Linux processes (containerised or not).

    There is nothing preventing Apple from implementing Darwin-native facilities so that Darwin processes could be containerised. It would actually be very nice to be able to distribute/spin up arbitrary macOS environments with some minimal CLI + CLT base† and run build/test stuff without having to spawn full-blown macOS VMs.

    † "base" in the BSD sense.