Comment by nimbius

3 years ago

you should not use wildcards or Let's Encrypt for internal authentication, as it's insecure for a few reasons.

1. implicit reliance on an internet connection means any loss of ACME connectivity to the Let's Encrypt CA makes cert renewal or OCSP problematic. if the internet goes down, so does much of the intranet, including parts that don't otherwise depend on it.

2. wildcard certs make mounting an attack on the network easier: an attacker no longer needs an issued cert for a malicious service, just a way to obtain or use the wildcard. you should know your services and the SANs on their certs, and audit them periodically.

1. Renewal is scripted to retry daily, starting 30 days before expiry, in most common utilities. If Let's Encrypt and all other ACME hosts are down for 30 days, I think you have bigger issues.

2. If you can't secure a wildcard cert, how does the same problem not apply to a root CA cert? A compromised root could also sign, say, google.com certs that your internal users trust, which feels strictly worse. (I know there are cert extensions that allow restricting a CA to a subdomain, but they're not universally supported and are still scoped as wide as a wildcard cert.)

  • If an organisation I work for requires me to trust their CA, that trust will go into a VM where the only things allowed to run are internal to the org. This will hamper my productivity, but only for a short time until my notice period runs out, at which point I will be working for another, saner organisation.

    • I don't go that extreme - my employer is free to install their own root CA on devices they own and supply.

      I understand some startups are a bit more "go get your own computer". I think if they paid for it, it's still their device; but once you pay for it out of your own cash, yeah, MDM or root certs are a no-go.


  • OCSP is still a problem, as you'll need to either proxy a local OCSP response during outages or disable validation entirely. microservices in an AWS partial outage, for example, would suffer here.

    a root CA cert is stored in a Gemalto or other boutique HSM. it has an overwhelming security framework to protect it (if it's ever online): security officers to reset PINs with separate PINs, and an attestation framework that gates its functions behind two or more known agents with separated privileges. even the keyboard connected to the device is cryptographically authenticated against the hardware it connects to.

    commonly your root is even offline and unavailable (locked in a vault), coming out only to sign new issuing CAs.
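    one partial mitigation for the OCSP dependency, at least for TLS services, is stapling: the server fetches and caches the OCSP response itself and hands it to clients in the handshake, so validation can survive a short responder outage. a hypothetical nginx fragment (the directives are real nginx directives; the chain path is illustrative):

```nginx
# OCSP stapling: nginx fetches and caches the OCSP response and
# staples it into the TLS handshake, so clients never have to
# contact the OCSP responder themselves.
ssl_stapling on;
ssl_stapling_verify on;
# CA chain used to verify the stapled response (illustrative path).
ssl_trusted_certificate /etc/ssl/internal-chain.pem;
```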

    • > a root CA cert is stored in a Gemalto or other boutique HSM. it has an overwhelming security framework to protect it (if it's ever online): security officers to reset PINs with separate PINs, and an attestation framework that gates its functions behind two or more known agents with separated privileges. even the keyboard connected to the device is cryptographically authenticated against the hardware it connects to.

      There are many organisations not large enough to justify this setup, for which Let's Encrypt is clearly safer than a custom root CA.

  • If you're making your own root cert, you should use name constraints to restrict issuance to specific DNS names.

    https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1....

    https://wiki.mozilla.org/CA:NameConstraints

    Although... I have no idea whether browsers/applications/OpenSSL/etc. actually verify this, but they should.

    (Disclaimer I work at LE)
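    as a concrete sketch, OpenSSL's config syntax can attach a critical name-constraints extension when minting the root. everything here (names, paths, lifetime) is illustrative:

```shell
# Minimal sketch: a private root CA constrained to .corp.example.
# nameConstraints is marked critical, so a verifier that doesn't
# understand the extension should reject the chain outright.
cat > ca.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_ca
prompt = no
[dn]
CN = Example Internal Root
[v3_ca]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.corp.example
EOF

# Self-signed, name-constrained root.
openssl req -x509 -new -nodes -newkey rsa:2048 \
  -keyout ca.key -out ca.crt -days 365 -config ca.cnf

# Confirm the constraint landed in the cert.
openssl x509 -in ca.crt -noout -text | grep -A2 'Name Constraints'
```

    whether a given verifier enforces the constraint is exactly the open question above; recent OpenSSL and the mainstream browsers do check it during chain validation.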

    • > (I know there are cert extensions that allow restricting a CA to a subdomain, but they're not universally supported and are still scoped as wide as a wildcard cert.)

      I even mentioned that in my post ;)

It seems like the easiest self-managed alternative is several orders of magnitude more complicated, though. Managing a local CA is trivial in a homelab, but pushing your CA cert to every machine and service that needs it quickly grows complex as the fleet gets bigger and more heterogeneous. Every system has a different CA-management tool with different quirks and permission models, and the technological complexity can pale next to the organizational complexity of getting access to those systems in the first place. If you even can: some services Just Don't Work with private CAs, and then inventing a proxy service becomes part of your private-CA-induced workload. On top of that, if you want certificate rotation and expiry notification comparable to Let's Encrypt's, you're going to need infrastructure to make it happen.
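The expiry-notification piece, at minimum, is the sort of check you end up running from cron against every cert you know about. A sketch using only the OpenSSL CLI (the filenames, subject, and 30-day threshold are all illustrative; a real job would loop over an inventory):

```shell
# Make a short-lived test cert to stand in for an inventoried cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout svc.key -out svc.crt \
  -days 14 -subj "/CN=svc.internal.example"

# -checkend exits nonzero if the cert expires within N seconds;
# here: flag anything inside a 30-day renewal window.
if ! openssl x509 -in svc.crt -noout -checkend $((30*24*3600)); then
  echo "svc.crt expires within 30 days: renew now"
fi
```

The hard part isn't this check; it's maintaining the inventory the check runs over.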

Is there a tool that solves (some of) this that I just don't know about?

I've seen big companies do it manually, but it's a full-time job, sometimes multiple full-time jobs, and the result still has more steady-state problems (e.g. people leaving and certs expiring without notification) than Let's Encrypt.

  • > Is there a tool that solves (some of) this that I just don't know about?

    There's a company called Venafi that makes a product that lives in this space. It tries to auto-inventory certs in your environment and facilitates automatic certificate creation and provisioning.

    From what I hear, it's not perfect (or at least, it wasn't as of a few years ago): some apps do wonky things with cert stores, so auto-provisioning doesn't always work, but it was pretty reliable for the major flavors of web server. Discovery was also hard to tune to get good results. But once you have a working inventory, lifecycle management gets easier.

    I think it's just one of those things where, if you're at the point of doing this, you have to accept that it will be at least one person's full-time job; and if you can't accept that, well, I hope you can accept random outages due to cert expiration.

It really depends on your risk tolerance and capability.

I built out a PKI practice in a large, well-funded organization; even for us, it is difficult to staff PKI skill sets, and commercial solutions are expensive. Some network dude running OpenSSL on his laptop is not a credible thing.

Using a public CA is nice, as you may be able to focus more on the processes and mechanics adjacent to PKI. You can also pay companies like DigiCert to run private CAs.

The other risks can be controlled in other ways. For example, we set up a protocol where a security incident would be created if a duplicate private key was detected during scans that hit every endpoint at least daily.
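The duplicate-key detection described above can be sketched by fingerprinting each cert's public key: two different certs with the same fingerprint were issued against the same private key. A minimal illustration (all names and lifetimes are made up; a real scan would pull certs off live endpoints):

```shell
# Cert A with a fresh key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout a.key -out a.crt \
  -days 30 -subj "/CN=svc-a.internal"
# Cert B reusing A's private key -- the incident condition.
openssl req -x509 -new -key a.key -out b.crt \
  -days 30 -subj "/CN=svc-b.internal"

# Fingerprint the public key (not the whole cert).
fp() { openssl x509 -in "$1" -noout -pubkey | openssl sha256; }

if [ "$(fp a.crt)" = "$(fp b.crt)" ]; then
  echo "duplicate private key detected: open a security incident"
fi
```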