Comment by nodesocket

2 days ago

Nice to know, though I wonder how many companies really use only private images? I've certainly had a client running their own Harbor instance, but almost all the others pulled from Docker Hub or GitHub (ghcr.io).

Lots of medical and governmental organisations are not allowed to run in public cloud environments. It's part of my job to help them get set up. In reality, however, that often boils down to devs whining about adding an upstream registry to Harbor as a pull-through cache; nobody is going to recompile base images and read through millions of lines of third-party code.
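For context on what that cache setup buys you: with a Harbor proxy-cache project in front of Docker Hub, the only thing devs change is the image reference they pull, and Harbor fetches and stores the upstream layers on a cache miss. Here is a minimal sketch of that reference rewrite; the Harbor hostname and project name are hypothetical placeholders, not anyone's actual setup:

    HARBOR_HOST = "harbor.example.com"   # hypothetical Harbor instance
    PROXY_PROJECT = "dockerhub-proxy"    # project configured as a proxy cache for docker.io

    def via_proxy_cache(image: str) -> str:
        """Map a Docker Hub reference to its path behind the Harbor proxy cache.

        'nginx:1.25'           -> 'harbor.example.com/dockerhub-proxy/library/nginx:1.25'
        'grafana/grafana:10.4' -> 'harbor.example.com/dockerhub-proxy/grafana/grafana:10.4'
        """
        repo, _, tag = image.partition(":")
        tag = tag or "latest"
        # Official images on Docker Hub live under the implicit 'library/' namespace.
        if "/" not in repo:
            repo = f"library/{repo}"
        return f"{HARBOR_HOST}/{PROXY_PROJECT}/{repo}:{tag}"

    if __name__ == "__main__":
        for ref in ("nginx:1.25", "grafana/grafana:10.4", "redis"):
            print(f"{ref:22} -> {via_proxy_cache(ref)}")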

A lot of security is posturing: legally covering your ass by following an almost arbitrary set of regulations. In practice, most of these organisations end up running the same code as the rest of us anyway. People need to get stuff done.

I work on the container registry team at my current company; we run a custom container registry service!

  • How does this require a whole team, unless you're working at a hyperscaler?

    • Not a hyperscaler, but we’re multi-cloud and probably one to two steps down.

      My team’s service implements a number of performance and functionality improvements on top of your typical registry to support the company’s needs.

      I can’t say much more than that sadly.

    • Please describe their system for us, including throughput, the hardware it runs on, networking constraints, and how many people it should take to operate it.

The public sector, and anyone concerned with compliance under the Cyber Resilience Act, should really use their own private image store. Some do, some don't.
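As a rough illustration of the day-to-day mechanics, "your own private image store" usually means mirroring the public base images you depend on into an internal registry and building from those. A minimal sketch, assuming a hypothetical internal registry hostname, placeholder images, and a Docker CLI that is installed and already logged in to the target registry:

    import subprocess

    PRIVATE_REGISTRY = "registry.internal.example"        # hypothetical internal registry
    BASE_IMAGES = ["debian:12-slim", "python:3.12-slim"]  # placeholder upstream images

    def mirror(image: str) -> None:
        """Pull an upstream image, retag it for the private registry, and push it."""
        target = f"{PRIVATE_REGISTRY}/mirror/{image}"
        subprocess.run(["docker", "pull", image], check=True)
        subprocess.run(["docker", "tag", image, target], check=True)
        subprocess.run(["docker", "push", target], check=True)
        print(f"mirrored {image} -> {target}")

    if __name__ == "__main__":
        for img in BASE_IMAGES:
            mirror(img)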

I think once your eng org is > 300 people and you have a dedicated infra and security team, it's going to be on their radar as something to do at some point.