Comment by alexellisuk
6 years ago
Some thoughts / scenarios:
"Fine we will just pay" - I have a personal account then 4 orgs, that's ~ 500 USD / year to keep older OSS online for users of openfaas/inlets/etc.
"We'll just ping the image very 6 mos" - you have to iterate and discover every image and tag in the accounts then pull them, retry if it fails. Oh and bandwidth isn't free.
"Doesn't affect me" - doesn't it? If you run a Kubernetes cluster, you'll do 100 pulls in no time from free / OSS components. The Hub will rate-limit you at 100 per 6 hours (resets every 24?). That means you need to add an image pull secret and a paid unmanned user to every Kubernetes cluster you run to prevent an outage.
"You should rebuild images every 6 mo anyway!" - have you ever worked with an enterprise company? They do not upgrade like we do.
"It's fair, about time they charged" - I agree with this, the costs must have been insane, but why is there no provision for OSS projects? We'll see images disappear because people can't afford to pay or to justify the costs.
A thread with community responses - https://twitter.com/alexellisuk/status/1293937111956099073?s...
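The image-pull-secret workaround mentioned above can be sketched roughly as follows. This is a hedged example, not an official recipe: `<username>`/`<password>` are placeholders for a paid Hub account's credentials, and `default` is just the example namespace.

```shell
# Create a registry credential secret in the target namespace.
kubectl create secret docker-registry hub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace=default

# Attach it to the namespace's default service account so every pod
# created there pulls with these credentials automatically, instead of
# counting against the anonymous per-IP limit.
kubectl patch serviceaccount default -n default \
  -p '{"imagePullSecrets": [{"name": "hub-creds"}]}'
```

You would have to repeat this per namespace (or bake it into cluster provisioning), which is exactly the operational overhead being complained about.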
> Oh and bandwidth isn't free.
But neither is storage
> have you ever worked with an enterprise company? They do not upgrade like we do.
I'm sure someone somewhere is going to shed a tear for the enterprise organisations with shady development practices using the free tier who may be slightly inconvenienced.
Everyone should actually read the Docker FAQ instead of assuming. This only applies to inactive images.
https://www.docker.com/pricing/retentionfaq
What is an "inactive" image? An inactive image is a container image that has not been either pushed to or pulled from the image repository in 6 or more months.
That may well be true, but now I have to pull images every 6 months, including some I very much doubt I'll ever upgrade but will pull any time I reformat the relevant host.
It sucks that this isn't for new images only. Now I have to go and retrospectively move my old images to a self-hosted registry, update all my deploy scripts to the new URIs, debug any changes, etc.
>"You should rebuild images every 6 mo anyway!" - have you ever worked with an enterprise company? They do not upgrade like we do.
No, but they've got cash and are not price sensitive. Wringing money out of them helps keep it cheap and/or free for everyone else.
Enterprise customers might as well fork over cash to Docker rather than *shudder* Oracle.
Companies might base their images on another image in the Docker registry. That image might be good now, and might still be good in two years, but what if I want to pull, say, a .NET Core 1.1 docker image in four years?
Now, .NET Core 1.1 might not be the best example, but I'm sure you can think of some example.
If you anticipate needing that image around in 4 years for a critical business case, you can either pull it once every 6 months from here on out, download the image and store it somewhere yourself, or make a fully reproducible Dockerfile for it so the image can be re-created later if it disappears from the registry.
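The "download and store it yourself" option can be sketched like this. The image name, archive filename, and S3 bucket are all placeholders for illustration:

```shell
# Pull the image while it still exists, then serialize it to a tarball.
docker pull some/image:1.1
docker save some/image:1.1 | gzip > some-image-1.1.tar.gz

# Park the archive in storage you control (S3 here as an example).
aws s3 cp some-image-1.1.tar.gz s3://my-image-archive/

# Years later, restore it without touching any registry at all:
gunzip -c some-image-1.1.tar.gz | docker load
```

`docker save`/`docker load` round-trips the full image including all layers and metadata, so this works even if the tag vanishes from the Hub.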
Enterprises upgrade on a slower schedule, yes, but they still patch as quickly as everybody else.
Can you patch a docker image? Sort of, but it's easier to rebuild. And that's what they do.
> Enterprises upgrade on a slower schedule, yes, but they still patch as quickly as everybody else.
Hahahahahahaaaa!! No. Not in my experience.
I feel like the main response should be "OK, we'll just host our own Docker Registry."
This has been available as a docker image since the very beginning, which might not be good enough for everyone, but I think it will work for me and mine.
Agreed that self-hosting registries should be way more common than it is today and maybe even standard practice.
It's crazy easy to do; just start the registry container with a mapped volume and you're done.
Securing it, adding authentication/authorization, and properly configuring your registry for exposure to the public internet, though... the configuration for that is very poorly documented IMO.
EDIT: Glancing through the docs, they do seem to have improved on making this more approachable relatively recently. https://docs.docker.com/registry/deploying/
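For the "crazy easy" local case, a minimal sketch (no TLS or auth, so suitable for a trusted LAN only; the host path and image are example values):

```shell
# Run the official registry image with storage on a mapped volume.
docker run -d -p 5000:5000 --name registry \
  -v /srv/registry-data:/var/lib/registry \
  registry:2

# Mirror an image into it: retag against the local registry and push.
docker pull alpine:3.12
docker tag alpine:3.12 localhost:5000/alpine:3.12
docker push localhost:5000/alpine:3.12

# From then on, pulls come from your own registry.
docker pull localhost:5000/alpine:3.12
```

Everything after that (TLS certificates, htpasswd or token auth, garbage collection) is where the configuration burden mentioned above kicks in.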
Note that for OSS images, that's a non-trivial thing to do: you need somewhere to run the registry and somewhere to store the images (e.g. S3), neither of which is free, and you'd also end up with more documentation to write and less discoverability than Docker Hub offers.
GitHub Packages is free for public repositories so seems like a good option for OSS which likely have a GitHub presence already. https://github.com/features/packages
Last time I checked, GitLab offered a free Docker container registry for all projects.
A friend of mine offers Docker (and more) repository hosting for $5/mo. He is competent and friendly and I would recommend his product: https://imagecart.cloud/
> "You should rebuild images every 6 mo anyway!" - have you ever worked with an enterprise company? They do not upgrade like we do.
Good opportunity to sell a support contract. The point still stands: a six-month-old image is most likely stale.
Docker is doing the ecosystem a favor.
Apologies for the hand-waving, but is there a well-known community sponsored public peer-to-peer registry service, based on https://github.com/uber/kraken perhaps?
> Oh and bandwidth isn't free.
Neither is it for Docker...
It looks like if anyone pulls an image within 6 months, then the counter is reset. It seems like it's not too onerous to me—for any of the OSS images I've maintained, they are typically pulled hundreds if not thousands of times a day.
Sometimes I don't push a new image version (if it's not critical to keep up with upstream security releases) for many months to a year or longer, but those images are still pulled frequently (certainly more than once every 6 months).
I didn't see any notes about rate limiting in that FAQ, did I miss something?
The FAQ is a bit incomplete, or trying to hide it. Section 2.5 of the TOS also introduces a pull-rate provision. You can see it on the pricing page: https://www.docker.com/pricing
That's a bit confusing: is the limit max pulls per image, per 6-hour period, per org, or per user (which is weird, since it's framed as authenticated vs anonymous)?
Honestly though 5 dollars a month isn't bad if you don't want to deal with hosting yourself.
> "Doesn't affect me" - doesn't it? If you run a Kubernetes cluster, you'll do 100 pulls in no time from free / OSS components. The Hub will rate-limit you at 100 per 6 hours (resets every 24?). That means you need to add an image pull secret and a paid unmanned user to every Kubernetes cluster you run to prevent an outage.
I can't find this. It's not in the original link, is it?
That information suddenly appeared(?) on their pricing page in the comparison table, near the bottom: https://www.docker.com/pricing
> "You should rebuild images every 6 mo anyway!" - have you ever worked with an enterprise company? They do not upgrade like we do.
A bunch of enterprises are going to get burned when say ubuntu:trusty-20150630 disappears.
It's not that they even have to rebuild their images... they might be pulling from one that will go stale.
> "We'll just ping the image very 6 mos" - you have to iterate and discover every image and tag in the accounts then pull them, retry if it fails. Oh and bandwidth isn't free.
Set up CircleCI or similar to pull all your images once a month :)
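The enumerate-and-pull loop described above can be sketched against the Docker Hub v2 API. This is an assumption-laden sketch: `myorg` is a placeholder, it assumes `curl` and `jq` are available, and it only fetches the first 100 repos/tags per page rather than following the API's `next` pagination links.

```shell
#!/bin/sh
# Pull every tag of every repo under an account so nothing goes "inactive".
ORG=myorg

for repo in $(curl -s "https://hub.docker.com/v2/repositories/${ORG}/?page_size=100" \
                | jq -r '.results[].name'); do
  for tag in $(curl -s "https://hub.docker.com/v2/repositories/${ORG}/${repo}/tags/?page_size=100" \
                 | jq -r '.results[].name'); do
    # Retry-on-failure is left as a simple log line here.
    docker pull "${ORG}/${repo}:${tag}" || echo "FAILED: ${ORG}/${repo}:${tag}"
  done
done
```

Run monthly from CI and this keeps the retention clock reset, at the cost of the bandwidth the parent comment is complaining about.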
> Oh and bandwidth isn't free.
I'm not sure what protocol is used for pulling Docker images, but perhaps it could be enough to just initiate the connection, get Docker Hub to start sending data, and immediately terminate the connection. This should save bandwidth on both ends.
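The pull protocol is the Docker Registry HTTP API (v2), and a cleaner variant of this idea is to request only the manifest (a few KB) rather than cutting a layer download mid-transfer. Whether a manifest-only fetch counts as a "pull" for the retention clock is an assumption; the FAQ doesn't define "pull" at the protocol level. `library/alpine` is just an example repo:

```shell
REPO=library/alpine
TAG=latest

# Get an anonymous pull token for the repo.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" \
          | jq -r .token)

# Fetch only the manifest; prints the HTTP status (200 on success).
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/${REPO}/manifests/${TAG}"
```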