
Comment by FireBeyond

1 month ago

I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.

I've really appreciated RH's work both on podman/buildah and on the supporting infrastructure, such as the kernel features that enable nesting, e.g. using buildah to build an image inside a containerized CI runner.

That said, I've been really surprised not to see more first-class CI support for a repo supplying its own Dockerfile and saying "stage 1 is to rebuild the container", "stage 2 is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer".
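
For anyone unfamiliar, the lockfile-keyed pattern looks roughly like this. A minimal sketch for a hypothetical Node project (the base image, filenames, and commands are illustrative; the same shape works with any lockfile-based package manager):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Copy only the manifest and lockfile first, so the cache key for the
# dependency-install layer is the lockfile's content, not the whole tree.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes only invalidate layers from here down; the install
# above stays cached until the lockfile itself changes.
COPY . .
RUN npm run build
```

Any change to application source reuses the cached `npm ci` layer, so the expensive install only reruns when dependencies actually change.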

  • Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond the stuff that ArgoCD was doing, and Shipwright (which isn't really CI/CD focused but did some stuff around the actual build process, though it really suffered from a failure to launch).

    • My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB.

      But if that's not the case (robotics, ML, gamedev, etc.), or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take up non-trivial time, which is particularly galling for a step that container tools are so well equipped to cache away.

      I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.

That's true, we didn't do much around it. Small startup with monetization problems and all.

  • So absolutely at least some of that is true.

    I’d be surprised if the systemd thing was not also true.

    I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat’s bread and butter; are you saying they developed all of that for no reason?

    • I made no comment about Red Hat's offerings.

      I don't feel like Red Hat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.

      What they did do, AIUI based on feedback in the OSS docker repos, is that those contracts stipulated you must run RHEL in the container and on the host, and use systemd in the container, in order to be "in support". So that's kind of a self-feeding thing.
