
Comment by nine_k

3 years ago

> Http/3 doesn't even have option to work without certificate authorities.

Unencrypted HTTP is dead for any serious purpose. Any remaining use is legacy, like code written in BASIC.

With Let's Encrypt on one hand, and single-binary utilities to run your own local CA on the other, this should pose no problem.

> this should pose no problem.

It poses a stack of problems a foot high.

Some random examples:

Docker, Kubernetes, etc. use HTTP by default. Not HTTPS or HTTP/3. Unencrypted HTTP/1.1! This is because container images are immutable snapshots, so certificates, which expire and rotate, can't just be baked into them. Injecting certificates at runtime is a pain in the butt, because there is no standardised mechanism for it.
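
If you're wondering what "injecting" even looks like, here's a minimal Go sketch of the usual workaround: ship the image with no certificate and mount one in at runtime (docker run -v, or a Kubernetes secret volume). Every path and variable name below is invented, which is exactly the point: there's no standard to follow.

    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Hypothetical names: there is no standard for where injected
        // certs live or how the app learns about them.
        cert := os.Getenv("TLS_CERT_FILE") // e.g. /etc/tls/tls.crt from a mounted secret
        key := os.Getenv("TLS_KEY_FILE")
        if cert == "" || key == "" {
            // ...which is why most images fall back to plain HTTP.
            log.Fatal("no certificate injected")
        }
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over TLS\n"))
        })
        log.Fatal(http.ListenAndServeTLS(":8443", cert, key, nil))
    }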

Okay! You inserted a certificate! For... what name? Is it the "site host name", or the "server name"? Either one you pick will be wrong for something. Many web apps expect to see a host header on the backend that matches the frontend, and will poop themselves if you give them a per-machine (or per-container) certificate. I've seen cloud load balancers that have the opposite problem and expect valid per-machine certificates!
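
To make the naming trap concrete, a hedged Go sketch (both host names are invented): an app that verifies against the site name will reject a backend that presents a per-machine certificate.

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // The app dials the machine but verifies the name it believes it
        // is talking to. With a per-machine cert this typically fails with
        // something like: "x509: certificate is valid for
        // backend-01.internal, not app.example.com".
        _, err := tls.Dial("tcp", "backend-01.internal:443", &tls.Config{
            ServerName: "app.example.com", // the frontend/site name
        })
        fmt.Println(err)
    }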

If you pick per-machine certificates, then by definition something in the middle (typically the load balancer) has to terminate TLS and re-encrypt, i.e. man-in-the-middle the connection, which breaks a handful of apps that require (and enforce!) end-to-end cryptography.

Okay, fine, you have Let's Encrypt issuing per-site certificates, automatically, via your public endpoint. Nothing could be easier! Right up until someone in secops says that you also need to make the non-production sites have "private endpoints". Now you need two distinct mechanisms for certificate issuance, one internal-only and one public. Double the fun.
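
For the public half, something like Go's autocert package is about as painless as advertised; a sketch, with a made-up host name. The catch is that it only works because Let's Encrypt can reach the endpoint to answer the ACME challenge, which is exactly what a private endpoint forbids, so the internal half needs a second mechanism.

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("app.example.com"), // hypothetical
            Cache:      autocert.DirCache("/var/cache/autocert"),
        }
        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(), // also answers the tls-alpn-01 challenge
        }
        // Issuance and renewal happen on demand during the handshake;
        // none of this works if Let's Encrypt can't reach us.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }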

It just goes on and on: you'll also likely have to deal with CDNs, API gateways, Lambda/Functions, S3 / blob accounts, legacy virtual machines, management endpoints, infrastructure consoles, and so on. Some of these have integrated issuance/renewal capability, some don't. Some break because of your DNS CAA records (which whitelist the CAs allowed to issue for your domain, so issuance from any other CA just fails). Some don't. Some send notifications before expiry, some don't. And so forth...

As a random example, I recently had to deal with a GIS product that shall not be named that requires an HTTPS REST API to set or change its certificates. Yes. You heard me. HTTPS. To set a valid certificate, you first have to automate against an HTTPS endpoint with an invalid certificate, restart the service, do a multi-minute wait in a retry loop, and then continue the automation. Failure to handle any one of the dozen failure scenarios and corner cases will lead to a dead service that won't start at all. Fun stuff.
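
In case that sounds exaggerated, the bootstrap dance looks roughly like this Go sketch. The endpoint paths, payload, and timings are all hypothetical; the real part is having to turn off verification to talk to the very API that fixes verification.

    package main

    import (
        "bytes"
        "crypto/tls"
        "log"
        "net/http"
        "os"
        "time"
    )

    func main() {
        pem, err := os.ReadFile("bundle.pem") // the valid cert + key to install
        if err != nil {
            log.Fatal(err)
        }
        // Step 1: talk to the invalid-cert endpoint, so verification
        // must be off for this one client.
        insecure := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        if _, err := insecure.Post("https://gis.internal/admin/certificate", // hypothetical
            "application/x-pem-file", bytes.NewReader(pem)); err != nil {
            log.Fatal(err)
        }
        // Step 2: the service restarts itself; poll with a normal,
        // verifying client until it comes back with the new cert.
        deadline := time.Now().Add(10 * time.Minute)
        for time.Now().Before(deadline) {
            if _, err := http.Get("https://gis.internal/healthz"); err == nil {
                log.Println("service is back with a valid certificate")
                return
            }
            time.Sleep(15 * time.Second)
        }
        log.Fatal("service never came back; one of the dozen corner cases")
    }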

Automated certificate issuance for complex architectures is definitely not a solved problem in general.

  • What are the downsides of using http/1.1 or unencrypted http2 from a docker container?

    I’m imagining an application server in a docker container talking to a load balancer in the same data center. I can see some advantages to http2 (reduced head-of-line blocking, header compression, and multiplexing probably bring some performance benefits). But why do you want http3?

    • Most http/2 implementations enforce valid certificates, just like http/3 (see the sketch below).

      gRPC requires http/2.

      Some software, like the aforementioned accursed GIS product, refuses to work over unencrypted HTTP. It even ignores load balancer headers like X-Forwarded-Proto, just to be extra irritating.
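
      Go's standard library is a concrete case of the first point: net/http only speaks HTTP/2 over TLS, and cleartext h2 ("h2c") is a deliberate opt-in via the x/net packages. A sketch:

          package main

          import (
              "log"
              "net/http"

              "golang.org/x/net/http2"
              "golang.org/x/net/http2/h2c"
          )

          func main() {
              h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  w.Write([]byte(r.Proto + "\n")) // "HTTP/2.0" once h2c is negotiated
              })
              // Without h2c.NewHandler, this plaintext listener would
              // only ever do HTTP/1.1.
              log.Fatal(http.ListenAndServe(":8080", h2c.NewHandler(h, &http2.Server{})))
          }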

  • From my experience with developing gRPC-based microservices, I don't remember certificates being such a big deal.

    Mount a filesystem subtree with them inside a container; problem basically solved.
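
    A hedged Go sketch of that approach (the mount paths are invented): a gRPC server just picks the certs up from wherever the subtree landed.

        package main

        import (
            "log"
            "net"

            "google.golang.org/grpc"
            "google.golang.org/grpc/credentials"
        )

        func main() {
            // /etc/tls is wherever the subtree got mounted into the container.
            creds, err := credentials.NewServerTLSFromFile("/etc/tls/tls.crt", "/etc/tls/tls.key")
            if err != nil {
                log.Fatal(err)
            }
            lis, err := net.Listen("tcp", ":50051")
            if err != nil {
                log.Fatal(err)
            }
            srv := grpc.NewServer(grpc.Creds(creds))
            log.Fatal(srv.Serve(lis)) // register services before serving in real code
        }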

    • This isn’t even wrong; however, you’ve confused access to certificates with their issuance, validity, and rotation for a given runtime, which is OP’s point: it’s very complicated.

      There are utilities like Let’s Encrypt and Kubernetes cert-manager that make this somewhat easier, if their defaults work for you. But the devil is in the details.