Should you use Let's Encrypt for internal hostnames?

3 years ago (shkspr.mobi)

Several comments here mention running your own CA. Maybe that could be a signed intermediate CA with the Name Constraint extension [0] (and critical bit?), but one roadblock on this path is that allegedly Apple devices do not support that extension (edit: actually this was fixed! see reply). You there, @ LetsEncrypt?

To address the article: a recent related discussion, "Analyzing the public hostnames of Tailscale users" [1], indicates in its title one reason you might not want to use LE for internal hostnames. There was a discussion about intermediate CAs there as well [2], with some more details.
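For the curious, here is roughly what a name-constrained intermediate looks like in practice. This is a hedged sketch with made-up names and plain OpenSSL (1.1.1 or newer), and it also shows the relying-party side: verification fails for a leaf outside the permitted subtree.

```shell
# Sketch only: hypothetical names, not a production CA setup.

# Root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout root.key -out root.crt -subj "/CN=Example Root"

# Intermediate, constrained to .internal.example.com (critical extension)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout int.key -out int.csr -subj "/CN=Example Intermediate"
printf '%s\n' \
  'basicConstraints=critical,CA:TRUE,pathlen:0' \
  'keyUsage=critical,keyCertSign,cRLSign' \
  'nameConstraints=critical,permitted;DNS:.internal.example.com' > int.ext
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out int.crt -days 1825 -extfile int.ext

# A leaf inside the permitted subtree verifies fine...
openssl req -new -newkey rsa:2048 -nodes \
  -keyout ok.key -out ok.csr -subj "/CN=web.internal.example.com"
echo 'subjectAltName=DNS:web.internal.example.com' > ok.ext
openssl x509 -req -in ok.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -out ok.crt -days 365 -extfile ok.ext
openssl verify -CAfile root.crt -untrusted int.crt ok.crt

# ...while one outside the constraint is rejected by the verifier
openssl req -new -newkey rsa:2048 -nodes \
  -keyout bad.key -out bad.csr -subj "/CN=evil.example.org"
echo 'subjectAltName=DNS:evil.example.org' > bad.ext
openssl x509 -req -in bad.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -out bad.crt -days 365 -extfile bad.ext
openssl verify -CAfile root.crt -untrusted int.crt bad.crt || true  # fails
```

Of course, the constraint only helps if clients actually enforce it, which is exactly the concern raised about Apple devices.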

[0]: https://news.ycombinator.com/item?id=29614971

  • Apple devices support the Name Constraint extension just fine. I've deployed a bunch of internal CAs with Name Constraints, and Apple's macOS/iOS/iPadOS block certs signed for anything outside the constraints. As intended.

    From what I can find online, the Apple bug was fixed in macOS 10.13.3. [1]

    [1]: https://security.stackexchange.com/questions/95600/are-x-509...

    • That's great to hear! I'd only heard secondhand, so I updated my comment to reflect this detail.

      Also I found https://bettertls.com publishes details about which TLS features are supported on different platforms over time, and it appears that the latest test in Dec 2021 shows most platforms support name constraints.

      With that roadblock gone, I think this would be the perfect solution to a lot of organization- and homelab-level certificate woes. I'd really like to hear from a domain expert on how feasible it would be to automate this the way ACME does for free public certs.

      1 reply →

  • Wish it were that simple - ultimately, having a name-constrained but publicly trusted CA is the same as having any publicly trusted CA, and it comes with a ton of wonderful burdens like audits.

    You're essentially running a public CA at that point, and that isn't easy.

    • This is not a technical limitation though. It's a policy limitation.

      In theory, a name-constrained intermediate for `.example.com` has no more authority and poses no greater risk than a wildcard leaf certificate for `*.example.com`. In both cases the private key can be used to authenticate as any subdomain of `example.com`.

      But name constraints are verified by relying parties (the clients and servers that actually authenticate remote peers using certificates), and it's hard to be certain that everything has implemented them properly. This is, ostensibly and as far as I know, the reason the CA/Browser Forum hasn't allowed name-constrained intermediates.

      At some point it probably makes sense to just pull the bandaid off.

      2 replies →

  • > Several comments here mention running your own CA.

    You know, I feel like more people would actually do this if it weren't so challenging and full of sometimes unpleasant CLI commands. To me, openssl and similar tools are like comparing the UX of the tar and docker CLIs: the former is nigh unusable, as humorously explained here: https://xkcd.com/1168/

    In comparison, have a look at Keystore Explorer: https://keystore-explorer.org/screenshots.html

    Technically you can use it to run a CA, I guess, but in my experience it has mostly been invaluable for dealing with all sorts of Java and other keystores and certificates, and for certain operations on them (e.g. importing a certificate or chain into a keystore, generating new ones, or signing CSRs).

    Sure, you can't automate that easily, but for something you do rarely (which may or may not fit your circumstances), a rich graphical interface can be really nice compared to struggling with a text one, though that's admittedly subjective.

    Edit: on an unrelated note, why don't we have more software that runs CLI commands internally for GUI actions, with the option to copy those commands when necessary (say, the last or next queued command visible in a status bar at the bottom)? E.g. hover over a generate-certificate button and get a copyable full CLI command in the status bar.

    Of course, maybe just using Let's Encrypt (and remembering to use their staging CA for testing) and just grokking DNS-01 is also a good idea, when possible. Or, you know, any other alternatives that one could come up with.

    • I never got why people think using tar is hard. Specify your archive File with f. Want to eXtract it? Add an x. Want to Create it? Add a c. Want it to be Verbose while doing that? Add a v. If it's gZipped, add a z. Granted, j for bzip2 and t for listing are less obvious, but with that it's about everything you need for everyday usage, and more than enough to disarm that bomb.
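The mnemonic in action (hypothetical file names):

```shell
# Create a gzipped archive of a directory, then list and extract it.
mkdir -p site && echo hello > site/index.html
tar cvzf site.tar.gz site/      # Create, Verbose, gZip, File=site.tar.gz
tar tvzf site.tar.gz            # t = lisT the archive's contents
mkdir -p out && tar xvzf site.tar.gz -C out   # eXtract into out/
```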

      2 replies →

    • I'm biased because I'm the founder of the company, but you should check out the certificate management toolchain (CA[1] and CLI[2]) we've built at smallstep. A big focus of the project is human-friendliness. It's not perfect (yet) but I think we've made some good progress.

      We also have a hosted option[3] with a free tier that should work for individuals, homelabs, pre-production, and even small production environments. We've started building out a management UI there, and it does map to the CLI as you've described :).

      [1] https://github.com/smallstep/certificates

      [2] https://github.com/smallstep/cli

      [3] https://smallstep.com/certificate-manager/

      2 replies →

We have an internal certificate authority for internal domains at my job. We add the root CA certificate to each desktop or server through an endpoint agent that runs on every machine. That agent is used for monitoring, provisioning users, and even running arbitrary commands.

The article mentions BYOD (bring your own device) but we don't allow personal devices to connect to internal services, so this isn't an issue for us.

You can use something like EasyRSA to set up an internal certificate authority and generate server certificates signed by it. I started out using plain old OpenSSL for generating certificates, which EasyRSA uses under the hood, but in hindsight I wish I had used EasyRSA from the start.

By the way, EasyRSA still isn't that easy, but it's better than using OpenSSL directly.
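For reference, here is roughly the flow EasyRSA wraps, done with plain OpenSSL. This is a hedged sketch with hypothetical names; a proper setup would add intermediates, revocation, and so on.

```shell
# 1. The CA itself: a self-signed root (keep this key very safe)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Homelab Internal CA"

# 2. A key and CSR for one internal server
openssl req -new -newkey rsa:2048 -nodes \
  -keyout nas.key -out nas.csr -subj "/CN=nas.home.arpa"

# 3. Sign it with the CA, adding the SAN that clients actually check
echo 'subjectAltName=DNS:nas.home.arpa' > san.ext
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out nas.crt -days 825 -extfile san.ext

# 4. Confirm the chain
openssl verify -CAfile ca.crt nas.crt
```

The remaining (and harder) part is distributing ca.crt to every client that should trust it.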

  • > We have an internal certificate authority for internal domains at my job. We add the root CA certificate to each desktop or server through an endpoint agent that runs on every machine.

    One challenge with this is that some software doesn't use the operating system's CA chain by default. A lot of browsers use their own internal one and ignore what the OS trusts (by default).

    • It is also troublesome when you have to manage cert loading not just on end devices but ephemeral VMs and containers as well.

    • The big co I work for handles this via tooling that checks for browsers and sees if the cert is installed, or by publishing the CA cert for people to self-install ("your site looks weird - you're probably missing the CA"). It's not solved solved, but it's mostly solved. The browsers that come with the image on the enterprise release cadence all have the cert, and the people adding other browsers are usually devs or technically savvy enough to add a CA.

  • > By the way, EasyRSA still isn't that easy, but it's better than using OpenSSL directly.

    The trouble with EasyRSA (and similar tools) is that they make decisions for you and restrict what's possible and how. For example, I would always use name constraints with private roots, for extra security. But you're right about OpenSSL; to use it directly requires a significant time investment to understand enough about PKI.

    I tried to address this problem with documentation and templates. Here's a step by step guide for creating a private CA using OpenSSL, including intermediate certificates (enabling the root to be kept offline), revocation, and so on: https://www.feistyduck.com/library/openssl-cookbook/online/c... Every aspect is configurable, and here are the configuration templates: https://github.com/ivanr/bulletproof-tls/tree/master/private...

    Doing something like this by hand is a fantastic way to learn more about PKI. I know I enjoyed it very much. It's much easier to handle because you're not starting from scratch.

    Others in this thread have mentioned SmallStep's STEP-CA, which comes with ACME support: https://smallstep.com/docs/step-ca/getting-started That's definitely worth considering as well.

    EDIT: The last time I checked, Google's CA-as-a-service was quite affordable: https://cloud.google.com/certificate-authority-service AWS has one too, but there's a high minimum monthly fee. Personally, if the budget allows for it, I would go with multiple roots from both AWS and GCP for redundancy.

  • I have created a script that mimics most of a modern CA and intermediate CA infrastructure for testing HTTPS, Content Security Policy, and more at OrgPad, where I work. TLS Mastery by Michael W. Lucas https://mwl.io/nonfiction/networking#tls helped me a lot.

    Having an internal CA is a lot of work if you want to do it properly and not just for some testing. It is still rather hard to set up HTTPS properly without running a lot of infrastructure (DNS, VPN, or some kind of public server) that you wouldn't need otherwise.

  • > but we don't allow personal devices to connect to internal services, so this isn't an issue for us.

    You now have a hard dependency between whatever snake oil you run as an endpoint agent and how you provision TLS certificates for your servers. Congrats.

I will never understand the obsession people have with hiding their private server names.

If somebody gets any access to your local network, there are plenty of ways to enumerate them, and if they can't get access, what's the big deal?

I get that you may want to obfuscate your infrastructure details, but leaking infrastructure details in your server names is quite a red flag. It should really not happen. (Instead, you should care about the many, many ways people can enumerate your infrastructure details without looking at server names.)

  • Ingrained practices are the sort of thing that change one funeral at a time (see constant password rotation).

    It's a reasonable mitigation for certain environments, since hostnames do leak information that makes structuring attacks easier, but it's certainly not a hard wall of any sort. The main problem is that most people can't articulate the realistic threat model they're trying to address. Because that conversation rarely resolves well, assuming it's had at all, there's little rational pushback against "everything and the kitchen sink" approaches based on whatever blog the implementer last read.

    Personally I tend to advocate assuming your attacker knows everything about you except specific protected secrets (keys, passphrases, unique physical objects) and working back from there, but that's a lot of effort for organizations where security is rarely anything but a headache for a subset of managers.

    You'll see similar opinions about things like port-knocking puzzles and consumer ipv4 NAT, which provide almost zero security benefit but do greatly reduce the incidence of spurious noise in logs.

  • One of the examples given wasn't a server name; it was leaking potentially confidential information via the domain olympics-campaign.staging.example.org. In many environments it's fine if people know project names, but NDAs are a thing, and you could end up in hot water if you accidentally leak a partnership between two companies before it's been announced.

    • Well, if instead of putting a lot of effort into hiding your names you just didn't hide them, you wouldn't use a name like that in the first place.

      Every single person that connects to any of your networks (very likely the sandboxed mobile one too) can find that name. Basically no place hides it internally. There is very little difference between disclosing it to thousands of the people that care most about you and disclosing it to everybody in the world.

      3 replies →

  • It is a perfectly valid concern. Internal domain names can contain confidential information. They become vectors for attack (especially if running vulnerable software). Obfuscation doesn't mean perfect security but it still goes a long way towards it.

  • I think many of us, myself included, have been conditioned to be paranoid—just because I can’t think of/don’t know of any way some data could be abused doesn’t mean I’m going to make it public.

  • It's mainly about mitigating exposure. Possible vulnerabilities include social engineering (e.g. it's easier to send a targeted phishing URL to an employee if you know an internal domain) or injection into a public-facing service that has access to internal services.

  • Security is not boolean. What’s local can be public some day. Everything should be disclosed on a need to know basis.

This seems like a perfect use case for wildcard certs, especially if you have internal sites on a different (sub)domain from your prod servers. Yes, multiple servers have the same private key, but when the alternative is self-signed or no encryption, that is an easy trade-off for me.

  • > perfect use case for wild card certs

    I don't like distributing wild card certs as you then have a bigger problem if the cert is leaked.

    When the cert is host specific you immediately know where the leak comes from and the scope of the leak is restricted.
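To illustrate how a leaked key can be traced back to a host, here is a hedged sketch (hypothetical file names; the "leak" here is just a copy of host-a's own key): the public half derived from the leaked key is digested and compared against each host's certificate.

```shell
# Make a host key and its self-signed cert, and pretend the key leaked
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout host-a.key -out host-a.crt -subj "/CN=host-a.internal"
cp host-a.key leaked.pem

# Digest the public key from the leaked private key...
openssl pkey -in leaked.pem -pubout -outform DER | sha256sum

# ...and the public key embedded in each candidate host's cert
openssl x509 -in host-a.crt -noout -pubkey \
  | openssl pkey -pubin -pubout -outform DER | sha256sum

# Identical digests mean the leaked key belongs to that host's cert.
```

(sha256sum is the GNU coreutils tool; use `shasum -a 256` or similar elsewhere.)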

    • Yes, the scope of the leak would be limited. But if a privkey.pem file from one of the hosts of my network is leaked, how do I “immediately” know which host the leak came from?

      1 reply →

  • I don't know how LE does it, but at least with DigiCert (and I assume other commercial CAs), servers sharing the same wildcard cert don't have to share a private key. You generate a separate CSR from each server, and then request a duplicate copy of the wildcard cert using that CSR. That way they can have different SANs as well.
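A hedged sketch of that flow (hypothetical names; `-addext` needs OpenSSL 1.1.1 or newer): one key and CSR per server, all for the same wildcard subject but with a different per-host SAN.

```shell
# Generate a separate key and CSR on (or for) each server.
for host in web1 web2; do
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout "$host.key" -out "$host.csr" \
    -subj "/CN=*.example.com" \
    -addext "subjectAltName=DNS:*.example.com,DNS:$host.example.com"
done

# Each CSR carries the shared wildcard plus its own host name.
openssl req -in web1.csr -noout -text | grep -A1 'Alternative Name'
```

Each CSR would then be submitted to the CA for its own copy of the cert, so no private key ever leaves its server.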

    • When multiple CSRs [and thus multiple private keys] are involved you end up with multiple wildcard certificates. There is no sharing, technically speaking, but obviously the hostnames in all the wildcards are the same. However, that doesn't really buy you much in terms of security as any one of those wildcards can be used in an active network attack against any matching service if compromised.

      That is, unless you're using some sort of public key pinning, but that's very rare to find today and works only in a custom application or something that supports DNSSEC/DANE.

      3 replies →

    • Wildcard certs are (only?) issued from DNS-01 challenges. As long as the requester can satisfy the DNS challenge ACME doesn't care about key uniqueness.

      6 replies →

My company uses Let's Encrypt extensively for many thousands of customers' edge devices, which live in their own LANs. As long as the hostnames are random, or at least not too telling, there's pretty much nothing you're leaking except the internal IP address (10.x, 192.168.x) and how many servers you have. If you can live with that, then it's perfectly fine.

I wrote about it a few years ago: https://blog.heckel.io/2018/08/05/issuing-lets-encrypt-certi...

I read somewhere a while ago that LE is working on what's called an "intermediate CA" [0], which would solve the problem. Apparently, from a regulatory standpoint, there are some questions around abuse that need to be answered before they can go ahead. The basic idea is that you could issue your own certificates chained to the LE CA that is already recognised by the browsers.

EDIT [0] https://community.letsencrypt.org/t/does-lets-encrypt-offer-...

  • We're a long way away from name-constrained intermediates being viable from a regulatory and technical perspective. I'd explain, but a commenter in a thread linked from the one you posted has a pretty detailed explanation already: https://community.letsencrypt.org/t/sign-me-as-an-intermedia...

    • From that it looks like the main issue is regulatory requirements that force CAs to log all issued certificates via CT (certificate transparency) logs. Given that this is the very thing we're trying to avoid with a private CA ("CT" and "leaking internal hostnames" are functionally synonymous) we seem to be at an impasse at the level of base requirements.

      Maybe an IP constraint that restricts certs to only be valid for private IP space (10.*, 192.168.*, etc.)?

      5 replies →

I've used https://smallstep.com/docs/step-ca/ as a CA internally, works well.

  • What I'd want is an internal CA, like step-ca, but have the certificates signed by a "real" CA, so I don't have to distribute my own root CA certificate.

    • The dream would truly be an internal CA backed by a publicly trusted subordinate cert (limited to the domain you control). But afaik that can’t happen until the Name Constraint Extension is enforced by “all” clients.

      2 replies →

    • You really don't actually want this. This intermediate CA would still be subject to the same extensive CAB Forum / vendor root program requirements (audited yearly via WebTrust) as a root CA. There are a ton of requirements, including mandatory response times, that inevitably makes this require a fully staffed team to operate.

    • That would be a violation of the real CA's duty to only sign certs that they have some basis for believing are correct. (This basis almost always boils down to "controls the DNS".)

    • Wouldn't that allow you to issue certificates for google.com? Correct me if I've misunderstood, but for the sake of discussion pretend cert pinning doesn't exist (use another example domain if it's easier).

      2 replies →

  • I've been using it too and it works well, particularly with Caddy to do automatic certificates with ACME where possible.

    Plus all my services go through Tailscale, so although I am leaking internal hostnames via DNS, all those records point to is 100.* addresses

  • This is the correct answer ;)

    If you're going to run a serious internal network, you'll need the basic things like NTP, DNS, a CA server, and, yes, some kind of MDM to distribute internal CA certificates to your people. The real PITA is when you don't have these in place.

  • I've been setting this up for my homelab under .home.arpa, seems to work pretty well so far.

You should not use wildcards or Let's Encrypt for internal authentication, as it's insecure for a few reasons.

0. Implicit reliance on an internet connection means any loss of ACME connectivity to the Let's Encrypt CA makes cert renewal or OCSP problematic. If the internet goes down, so does much of the intranet that otherwise doesn't rely on it.

1. Wildcard certs make setting up an attack on the network easier: you no longer need an issued cert for your malicious service, you just need a way to get or use the wildcard. You should know your services and the SANs on their certs, and these should be periodically audited.

  • 1. With most common utilities, renewal is scripted to retry every day starting 30 days before expiry. If Let's Encrypt and all other ACME hosts are down for 30 days, I think you have bigger issues.

    2. If you can't secure a wildcard cert, how does the same problem not apply to a root CA cert? That one could also do things like sign google.com certs that your internal users trust, which feels strictly worse. (I know there are cert extensions that allow restricting a CA to a subdomain, but they're not universally supported and are still scoped as wide as a wildcard cert.)

    • If an organisation I work for requires me to trust their CA, that trust will go into a VM where the only things allowed to run are internal to the org. This will hamper my productivity, but only for a short time until my notice period runs out, at which point I will be working for another, saner organisation.

      4 replies →

    • OCSP is still a problem, as you'll need to either proxy a local OCSP response during outages or disable validation entirely. Microservices during a partial AWS outage, for example, would suffer here.

      A root CA cert is stored in a Gemalto or other boutique HSM. It has an overwhelming security framework to protect it (if it's ever online): security officers who reset PINs with separate PINs, and an attestation framework that gates access to its functions behind two or more known agents with separated privileges. Even the keyboard connected to the device is cryptographically authenticated against the hardware to which it connects.

      Commonly your root is even offline and unavailable (locked in a vault), and only comes out to sign new issuing CAs.

      1 reply →

  • It seems like the easiest self-managed alternative is several orders of magnitude more complicated, though. Managing a local CA is trivial in a homelab, but pushing self-signed certs to every machine and service that needs them quickly grows quite complex as you need to manage more of them and they grow more heterogeneous. Every stinking system has a different CA management tool with different quirks and different permissions models, and the technological complexity can pale in comparison to the organizational complexity of getting access to the systems in the first place. If you even can: especially in the case of services, they might Just Not Work with private CAs, and now inventing a proxy service is part of your private-CA-induced workload. On top of that, if you want certificate rotation and expiry notification as good as letsencrypt's, you're going to need infrastructure to make it happen.

    Is there a tool that solves (some of) this that I just don't know about?

    I've seen big companies do it manually, but it's a full time job, sometimes multiple full time jobs, and the result still has more steady-state problems (e.g. people leaving and certs expiring without notification) than letsencrypt.

    • > Is there a tool that solves (some of) this that I just don't know about?

      There's a company called Venafi that makes a product that lives in this space. It tries to auto-inventory certs in your environment and facilitates automatic certificate creation and provisioning.

      From what I hear, it's not perfect (or at least, it wasn't as of a few years ago); yeah, some apps do wonky things with cert stores, so auto-provisioning doesn't always work, but it was pretty reliable for most major flavors of web server. And discovery was hard to tune properly to get good results. But once you have a working inventory, lifecycle management gets easier.

      I think it's just one of those things where, if you're at the point where you're doing this, you have to accept that it will be at least one person's full-time job, and if you can't accept that... well, I hope you can accept random outages due to cert expiration.

  • It really depends on your risk tolerance and capability.

    I built out a PKI practice in a large, well-funded organization - even for us, it is difficult to staff PKI skill sets and commercial solutions are expensive. Some network dude running OpenSSL on his laptop is not a credible thing.

    Using a public CA is nice as you may be able to focus more on the processes and mechanics adjacent to PKI. You can pay companies like Digicert to run private CAs as well.

    The other risks can be controlled in other ways. For example, we set up a protocol where a security incident would be created if a duplicate private key was detected during scans that hit every endpoint at least daily.

Can Let's Encrypt issue multiple wildcard certs for different subdomains, like *.banana.example.com and *.grapefruit.example.com?

Then you could give each server a different wildcard cert without exposing full names like exchange.banana.example.com or log4j.grapefruit.example.com to the certificate log.

Ugly, but functional.

Alternatively should the certificate transparency log rules be changed to not include the subdomain? Maybe what matters is that you know that a certificate has been issued for a domain, when, and that you have a fingerprint to blacklist or revoke. Knowing which actual subdomain a certificate is for is very convenient, but is it proportionate?

  • > Alternatively should the certificate transparency log rules be changed to not include the subdomain? Maybe what matters is that you know that a certificate has been issued for a domain, when, and that you have a fingerprint to blacklist or revoke. Knowing which actual subdomain a certificate is for is very convenient, but is it proportionate?

    That was a big debate in the CA/B Forum when CT was created; the current behavior is a deliberate choice on the part of the browser developers, which they will probably not want to revisit.

Running your own private CA is a great way to cause problems for yourself down the road (just ask anyone with a 5 year and 1 day old Kubernetes cluster). But I also don't want to be dependent on a 3rd party for my internal services. I want a better solution: not as annoying as a private CA, and not dependent on 3rd parties.

I want to deploy apps that use certs that don't expire. When they should be rotated, I want to do them on my own time. And I want a standard method to automatically replace them when needed, that is not dependent on some cron job firing at the correct time or everything breaks.

Cert expiration is a ticking time bomb blowing up my services just because "security best practice" says an arbitrary, hard expiration time is the best thing. Security is not more important than reliability. For a single external load balancer for a website, we deal with it. But when you have thousands of the little bastards in your backend, it's just ridiculous.

  • > Security is not more important than reliability.

    Yes, it is. In most cases Confidentiality > Integrity > Availability. Systems should fail safe.

    There are some scenarios such as medical devices where integrity or availability trump confidentiality. But most information systems should favor going offline to prevent a breach of confidentiality or data integrity.

I've done it with a few key services like Home Assistant, using split-horizon DNS, and considered it less than ideal.

However, the alternatives suck as far as I know. I don't want to install my own CA certificate on all the various devices in the home, for instance, and keep it up to date.

With browsers making self-signing a PITA, what choices do I have?

This is one area where I think AWS does a huge disservice by making their Private CA so expensive ($400 a month + cost of certificates). This ends up pushing people to use public domains instead of private ones, or relying on other solutions outside of AWS. If the cloud companies would make it as easy to get private certificates as public ones you wouldn't see as many issues like this.

My life experience has taught me that it's better to have an imperfect but simple solution with known limitations (in this case Let's Encrypt) than an ideal solution that you can't configure correctly and don't fully understand (an internal CA for a small team).

The former gives you known limitations; the latter works fine for a while and feels great, and then disaster strikes out of the blue.

The same problem plagues IoT solutions and home networking: there are no industry-accepted frameworks to enable encryption on the LAN like we do on the real internet. There is no way to know that I'm connecting to my home router or NAS when I type in its address.

This is an area where we have kind of failed as an industry.

> But there is a downside. The CT logs are public and can be searched. Firstly, [...]

This bit me recently. I have a certificate for homelab.myname.com, and as with any public-facing IP address, I get the expected brute-force ssh login attempts for users 'root', 'git', 'admin', etc...

But I was terrified (until I remembered about the public cert) to find attempts for users 'homelab' and 'myname' -- which, being my actual name, actually corresponds to a user.

It's obviously my fault for not thinking this through, and it's not a terrible issue, but thinking I was under a targeted attack was quite the scare!

Sadly, the answer is probably no (for the information leakage mentioned in the article).

But having an internal (even ACME API-supporting) CA is no walk in the park either. If you can swallow the trade off and design with publicly-known hostnames, I would highly recommend it.

There’s always some annoying device/software/framework requiring their own little config dance to insert the root cert. Like outbound-proxy configuration, but almost worse.

I don’t even want to imagine what would happen if/when the root key needs to be rotated due to some catastrophic HSM problem.

  • > Sadly, the answer is probably no (for the information leakage mentioned in the article).

    Eh, even in large organisations of expert IT users, the internal CA ends up training users to ignore certificate warnings.

    Sure, maybe the certificate is set up right on officially issued laptops - but the moment someone starts a container, or launches a virtual machine, or uses some weird tool with its own certificate store, or has a project that needs a raspberry pi, or the boss gets himself an ipad? They'll start seeing certificate errors.

    IMHO the risks created by users learning to ignore warnings are much greater than the risks from some outsider knowing that nexus.example.com exists.

I run my own internal CA.

Would not recommend that anyone use publicly valid Let's Encrypt certs for internal hostnames, since certificate transparency logs are public and will expose all of the hostnames of your internal infrastructure.

Don't say that! I managed to sign up for Starbucks Rewards before it launched here in New Zealand by looking at the staging certificates that were issued ;)

Lots of fun stuff is possible, but yeah, it's definitely something you should consider. Let's Encrypt allows wildcard certs, from memory, so you should probably use one of those per subdomain.

Why not just be your own signing authority for internal domains? You can propagate your toplevel public cert with most enterprise network provisioning tools.

  • Not only is running your own CA a pain, there is also minimal support for restricting CA scope validity, so anyone that needs to communicate with you effectively ends up trusting your CA for anything and everything. For most anyone except your own trusting partners or coworkers that's a complete non-starter.

  • Running your own PKI is fairly straightforward, particularly with tools like cfssl at your disposal.

    But running your own PKI properly is quite hard.

    Let's Encrypt gives you top tier PKI management for $0.

    • A business case for Let's Encrypt would be to support internal hosts that are not visible on the internet (Let's Encrypt can check that) and omit their hostnames from the Certificate Transparency logs.

      Let a business pay $100/year for 10 internal hostnames.

      2 replies →

    • > Let's Encrypt gives you top tier PKI management for $0.

      Ok, but it fails at one of the requirements.

I'm using a wildcard certificate and CNAME records to internal hostnames, and it works pretty nicely for my use case. I don't need to leak a map of my hostnames, and I don't need to do full split-horizon DNS.

So if I want to encrypt traffic to "service1.example.com", "service2.example.com" and "service3.example.com" that all run on server A, I'll make three CNAME records that all point to "server-a.internal", and I'll just resolve "server-a.internal" in my local network. Obviously, anyone can query what "service1.example.com" points to, but they won't figure out anything beyond "server A".
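In zone-file terms, the scheme above looks something like this (illustrative only, using the names from the example):

```
; public zone for example.com - only the target "server-a.internal" is exposed
service1.example.com.  300  IN  CNAME  server-a.internal.
service2.example.com.  300  IN  CNAME  server-a.internal.
service3.example.com.  300  IN  CNAME  server-a.internal.
; server-a.internal resolves only via the LAN's own resolver
```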

> OK, so you decide to have an internal DNS - now the whole world knows you have doorbell-model-xyz.myhome.example.com!

Uhm, or you use split horizon DNS? Who in their right mind would leak all their internal DNS names into a public DNS zone?

  • Sorry for the poor wording on my part. I meant that if you issue a LE Cert for your doorbell, and give it a "sensible" name, the name will appear in the CT Log.

  • That's in the article: Let's Encrypt leaks them for you if you use it for your intranet.

  • Named certs have the hostnames they’re valid for in the certificate itself.

    “View Certificate” in a browser, or openssl s_client on the CLI, will show you.
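For example, given any cert file on disk (here a throwaway self-signed one with two illustrative hostnames), openssl prints the SAN list directly:

```shell
# Create a throwaway cert with two SAN entries (requires OpenSSL 1.1.1+ for -addext).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=demo" \
  -addext "subjectAltName=DNS:service1.example.com,DNS:service2.example.com"

# List the hostnames the cert is valid for.
openssl x509 -in cert.pem -noout -ext subjectAltName
```

For a live server, pipe `openssl s_client -connect host:443 </dev/null` into `openssl x509 -noout -ext subjectAltName` for the same view.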

I just finished writing a long proposal: https://github.com/WICG/proposals/issues/43

PKI is fairly awful and bad for internal anything, unless you have a full IT team and infrastructure.

A much simpler solution would be URLs with embedded public keys, with an optional discover and pair mechanism.

Browsers already have managed profiles. Just set them up with a trusted set of "paired" servers and labels, push the configs with ansible (it's just like an old-school hosts file!), and don't let them pair with anything new.

If you have a small company of people you trust (probably a bad plan!), or are a home user, just use discovery. No more downloading an app to set up a smart device.

The protocol as I sketched it out (and actually prototyped a version of) provides some extra layers of security: you can't connect unless you already know the URL, or discovery is on and you see it on the same LAN.

We could finally stop talking to our routers and printers via plaintext on the LAN and encrypt everywhere.

We already use exactly this kind of scheme to secure our keyboards and mice, with discovery in open air not even requiring being on the same LAN.

We type our sensitive info into Google docs shared with an "anyone with this URL" feature.

It seems we already trust opaque random URLs and pairing quite a bit. So why not trust them more than the terrible plaintext LAN services we use now?

So people seem to be conflating the requirements of a CA (the thing that signs certificates and is considered the authority) and an RA (the registration authority).

Running a CA that issues certificates isn't that hard. There are off-the-shelf solutions and wraparounds for openssl as well.

Running an RA is hard. That's the part that has to check who is asking for a certificate and whether they're authorized to get one and what the certificate restrictions etc are.

Then there's the infrastructure issue on the TLS users (clients & servers) that need to have the internally trusted root of the CA installed and need the RA client software to automagically request and install the necessary leaf and chain certificates.

AWS has private CAs for $400/month, but if you want a root and then some signing intermediates, that's $400 for each (effectively the PCA is just a key stored in an AWS HSM and an API for issuing certificates).

A real HSM will cost roughly a year of that service, but the management of that hardware and protecting it and all the rigmarole around it is very expensive.

Every mobile phone and most desktops have a TPM that could be used for this, but having an API to access it in a standard way isn't that available.

DNS names are public by nature. Split horizon, private roots, private CAs etc are a sign you are trying to bend things backwards. Just don't use sensitive DNS names.

  • disagree on that - it's entirely possible to have an openssl private root CA and private DNS that doesn't talk to the internet at all and exists in RFC1918 IP space with no gateway or route to the outside world. not just a matter of ACLs on things like DNS servers but those same servers/VMs not even having interfaces that have any way to get traffic to a global routing table.

    split horizon I agree is risky.
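A minimal sketch of such an offline CA with plain openssl (all names illustrative; a real deployment would add key protection, revocation, and proper extensions):

```shell
# 1. Self-signed root CA - the key never needs to touch a routable network.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=Internal Root CA"

# 2. Key + CSR for an internal host.
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
  -subj "/CN=server-a.internal"

# 3. Sign the CSR with the root, adding a SAN (browsers ignore bare CNs).
printf "subjectAltName=DNS:server-a.internal\n" > san.ext
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 90 -out host.crt -extfile san.ext

# 4. Confirm the chain validates against the root.
openssl verify -CAfile ca.crt host.crt
```

Distribute ca.crt to the clients' trust stores and that's the whole trust chain; nothing here ever speaks to the public internet.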

Maybe a dumb question, but isn't it the case that not only Let's Encrypt but all the other providers publish to the Certificate Transparency logs too?

If so, then the decision is more like, whether to use a public or private certificate for an internal service.

  • Yep, we recently moved from DigiCert to LE and someone was alarmed at the certificate transparency logs, until we scrolled down the page to reveal the same logs from DigiCert.

    Wildcards hide it somewhat, but DigiCert charges per subdomain now, and every user thinks they need their own subdomain for some reason. So LE it is.

This is an interesting topic, for me.

I write iOS apps, and iOS requires that all internet communications be done with HTTPS.

It is possible to use self-signed certs, but you need to do a bit of work on the software, to validate and approve them. I don't like doing that, as I consider it a potential security vector (you are constantly reading about development code that is compiled into release product, and subsequently leveraged by crooks).

I am working on a full-stack system. I can run the backend on my laptop, but the app won't connect to it, unless I do the self-signed workaround.

It's easier for me to just leave the backend on the hosted server. I hardly ever need to work on that part.

  • For the project I'm working on currently, I use Charles Proxy's "Map Remote" function to map our UAT server's HTTPS URL to my local machine's HTTP URL.

    Also ngrok.com works really well if you need to give other people access to your dev environment.

    • > I use Charles Proxy's "Map Remote" function to map our UAT server's HTTPS URL to my local machine's HTTP URL.

      This looks really interesting. Thanks! I'll see if I can get away with it.

  • If you create a custom SSL CA, you can add that CA to your iOS devices and simulators, and they will trust your backend served with an SSL certificate issued by your custom CA, no app modifications needed. (On modern Android, this does not work out of the box - it requires the custom SSL CA fingerprints to be added to a network configuration file embedded in the app - but you could always use Gradle flavors and only add it to your debug/development builds)
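The Android side of this looks roughly like the following (file and element names are the standard Network Security Configuration ones; `my_ca` is an assumed raw resource holding the CA certificate):

```xml
<!-- res/xml/network_security_config.xml, referenced from the manifest via
     android:networkSecurityConfig="@xml/network_security_config" -->
<network-security-config>
    <!-- debug-overrides only applies to debuggable builds, so release
         builds never trust the custom CA -->
    <debug-overrides>
        <trust-anchors>
            <certificates src="@raw/my_ca" />
            <certificates src="system" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>
```

Using `debug-overrides` rather than a plain `base-config` trust anchor is what keeps the custom CA out of release builds entirely.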

  • > I write iOS apps, and iOS requires that all internet communications be done with HTTPS

    What if the app is on the same network as the server?

    I've got a Denon A/V receiver that has an HTTP interface and the Denon iOS app is able to talk to it. I've watched this via a packet sniffer and it definitely is using plain HTTP.

    • > I've got a Denon A/V receiver that has an HTTP interface and the Denon iOS app is able to talk to it. I've watched this via a packet sniffer and it definitely is using plain HTTP.

      That's interesting. I wonder why Apple let that go by. I've had apps rejected, because they wouldn't use HTTPS. Maybe it's OK for a local WiFi connection. Even then, Apple has been fairly strict.

      That said, I think that there are ways to register for exceptions.

So the problem is our naming scheme is insecure so we ask untrustworthy 3rd-party entities to vet our certificates. The CA mafia isn't gonna give up their hard-earned monopoly easily (remember CACert?), and most client companies are happy to have for-profit CAs for insurance/policy compliance. Something like DNSSEC+DANE[0] is more reasonable but unfortunately unsupported by most programs.

[0] https://datatracker.ietf.org/doc/html/rfc6698
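For reference, the DANE record data for the common "DANE-EE, SPKI, SHA-256" (3 1 1) case from RFC 6698 is just a hash of the certificate's public key, which can be computed with stock openssl (throwaway self-signed cert used here for illustration):

```shell
# Throwaway cert standing in for a real server certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=www.example.com"

# TLSA "3 1 1" payload: SHA-256 over the DER-encoded public key (SPKI).
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256

# The hex digest would be published in a record like:
#   _443._tcp.www.example.com. IN TLSA 3 1 1 <digest>
```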

  • No, the reasoning here is broken. Even with a secure naming scheme, you'd still need certificates, because you have to verify the authenticity of the secure channel you bring up (usually TLS), not just the security of the name. Any way you slice it, you end up with a third party vouching for your TLS certificate.

    • In the case of DNSSEC keys are distributed with the zone, so the trust anchor is the DNS root. Of course, your parent zone could lie about your keys (just like it could lie about your other records), but don't you think since DNS is already an attractive attack vector (as it can vouch for CAs to publish a trusted certificate), relying on it exclusively for certificate distribution would reduce the overall attack surface?

      I'm not saying no trusted parties is the end goal (though Tor's onion or the GNU Name System work in this area), but maybe giving dozens of corporations/institutes the power to impersonate your server (from a client UX perspective) isn't the best we can do.

      3 replies →

A public CA provides a trusted third-party entity so that two different parties do not need to trust each other directly. So, the answer is no. Why would you even consider this for internal communication?

  • Installing a root CA on devices is risky.

    From the article:

    > It means your employees aren't constantly fighting browser warnings when trying to submit stuff internally.

    If your employees get into the habit of ignoring certificate warnings then you have much bigger problems than leaking internal domain names.

It's possible to configure DNS to make a public domain point to an internal IP and register a certificate for that domain.

For example, you can register a certificate for local.yourcompany.com and point local.yourcompany.com to 127.0.0.1 to get HTTPS locally. The same could be done for internal network IPs.

It wouldn't work well with Let's Encrypt because their bot would just end up talking to itself in this scenario.

Of course you could also use my side project (expose.sh) to get a https url in one command.

Is it that hard to set up an internal CA? I have no idea what I'm doing, and I managed one for years until we moved offices and ditched our LAN.

  • The hard part is getting the root certificate in the trust store on every device in your organization.

    • Worse, it is often not the trust store on every device. It is often multiple trust stores on a device.

      The OS might have one. Each browser might have its own. For a developer, each language they use might need separate configuration to get its libraries to use the certificate.

  • That should worry the hell out of you.

    If you could install CAs only for a certain domain (defaulting to the name constraints, but actually set in the browser/OS) that would be fine. But installing a CA gives anyone with access to that CA the ability to mint pretty much any valid cert, and your potential lack of security raises flags.

I like the wildcard certificates option, however I have not been able to find an easy solution to distribute those certificates to every host I have internally. Is this usually done manually? Is there some equivalent to acme.sh?

The kind of hosts I have are OPNSense router, traefik servers, unifi controller etc.

  • My method is manual-ish¹. One VM is in charge of getting the wildcard certificates. Other than answering DNS requests for validation and SSH it has no public face.

    Each other machine regularly picks up the current outputs from there via SFTP weekly and restarts whatever services. I'm not running anything that I need near-perfect availability on ATM, so it is no more complex than that. If wanting to avoid unnecessary service restarts, check for changes and only do that part if needed, and/or use services that can be told to reload certs without a restart.

    This does mean I'm using the same key on every host. If you want to be (or are required to be) more paranoid than that then this method won't work for you unmodified and perhaps you want per-name keys and certs instead of a wildcard anyway. For extra carefulness you might even separate the DNS service and certificate store onto different hosts.

    Not sure how you'd do it with unifi kit, my hosts are all things I can run shell scripts from cron on running services like nginx, Apache, Zimbra, … that I can configure and restart via script.

    ¹ “manual” because each host has its own script doing the job, “ish” because once configured I don't need to do anything further myself
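A sketch of the per-host side of that pattern (service name, paths, and the SFTP source are all assumptions to adapt):

```shell
# Replace the installed cert and reload the service only when the fetched
# file actually differs, so routine weekly runs don't cause pointless restarts.
update_cert() {
    new="$1"; installed="$2"
    if cmp -s "$new" "$installed"; then
        echo "unchanged"
    else
        cp "$new" "$installed"
        echo "updated"   # here you would also: systemctl reload nginx
    fi
}

# A weekly cron job on each host would first fetch the current cert, e.g.:
#   sftp certs@cert-vm:latest/fullchain.pem /tmp/fullchain.pem
# and then:
#   update_cert /tmp/fullchain.pem /etc/ssl/internal/fullchain.pem
```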

  • > acme.sh

    Another shell-based ACME client I like is dehydrated. But for sending certs to remote systems from one central area, perhaps the shell-based GetSSL:

    > Obtain SSL certificates from the letsencrypt.org ACME server. Suitable for automating the process on remote servers.

    * https://github.com/srvrco/getssl

    In general, what you may want to do is configure Ansible/Puppet/etc, and have your ACME client drop the new cert in a particular area and have your configuration management system push things out from there.

  • At my last job I implemented the certificate generation as a scheduled job, which pushes the generated certificates to a private S3 bucket.

    Then, our standard Ansible playbooks set up on each node a weekly systemd timer which downloads the needed certificates and restarts or reloads the services.
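The timer half of that setup, roughly (unit names and the script path are made up for illustration):

```
# /etc/systemd/system/fetch-certs.timer
[Unit]
Description=Weekly certificate refresh

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fetch-certs.service
[Unit]
Description=Download current certs and reload services

[Service]
Type=oneshot
ExecStart=/usr/local/bin/fetch-certs.sh
```

`Persistent=true` makes systemd run a missed timer at next boot, which matters for machines that aren't always on.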

  • If you have root ssh on each machine you can make rsync cron jobs. Imo it's reasonably secure if you spend the time setting up ssh keys and disabling password auth after.

Interesting question. Quite complex. IMHO there is no clear right or wrong here.

Another nuisance is that unencrypted port 80 must be open to the outside world for the HTTP-01 ACME challenge (LE servers must be able to reach your ACME client running at the subdomain that wants a cert). They also intentionally don't publish a list of IPs that Let's Encrypt might be coming from [1]. So opening firewall ports on machines that are specifically internal hosts has to be a part of any renewal scripts that run every X days. Kinda sucks IMO.

[1] https://letsencrypt.org/docs/faq/#what-ip-addresses-does-let...

UPDATE: Apparently there is a DNS based solution that I wasn't aware of.
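That DNS-based solution is the DNS-01 challenge: you prove control of the name by publishing a TXT record, so no inbound port has to be open at all. With certbot it looks roughly like this (domain is illustrative; real automation would use a DNS-provider plugin instead of --manual):

```shell
certbot certonly --manual --preferred-challenges dns \
  -d internal.example.com -d '*.internal.example.com'
# certbot then prints a token to publish as a TXT record at
# _acme-challenge.internal.example.com before validation proceeds
```

DNS-01 is also the only challenge type Let's Encrypt accepts for wildcard certificates.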

Internally, even small companies should be doing some PKI for device enrollment and email security. If you manage your own DNS you need a PKI infra as well, imo.

I bought an extra domain for our internal network.

$COMPANY_SHORT_FORM.network

Works really well and we no longer have the issue of deploying root certs on devices.

Last time I considered using it, it was a pain for IIS servers because you have to manually renew every 90 days. Has this changed?

> The only real answer to this is to use Wildcard Certificates. You can get a TLS certificate for *.internal.example.com

Does Let's Encrypt support Subject Alt Names on the wildcard certs?

My experience suggests that wildcard certs work, but require a SAN entry for each "real" host because browsers don't trust the CN field anymore. e.g., my *.apps.blah cert doesn't work unless I include all of the things I use it on - homeassistant.apps.blah, nodered.apps.blah, etc.

Do Let's Encrypt certificates have something special that negates this requirement? Or am I completely wrong about the SAN requirement?

  • This sounds like something is broken in your client (or maybe server config)?

    I use Let's Encrypt wildcard certs quite extensively, both in production use at $dayjob and on my home network, and have never encountered anything like this. The only "trick" to wildcard certs is one for *.apps.blah won't be valid for apps.blah. The normal way to handle this is to request one with SANs *.apps.blah and apps.blah.

    Similarly, it won't work for sub1.sub2.apps.blah. I don't run setups like this myself but if you need it I'd recommend using a separate *.sub2.apps.blah for that, mainly due to the potential for DNS issues when LE is validating. Same thing with multiple top-level domains. The reason is when renewing if one of N validations fail, your certificate gets re-issued without the failed domain which then means broken SSL. If you have completely separate certificates and validation of one fails the old (working) version stays in place. With normal renewals happening at 30 days before expiry, this means you have 29 days for this to resolve on its own, manually fix, etc, and LE even emails you a few days before expiry if a certificate hasn't been renewed.

  • Wildcard certs from LE work fine for internal domains. I've been using one for a while now. I had to set up some cron jobs to copy them around and restart some services, but it seems to be working well.