What I'd want is an internal CA, like step-ca, but have the certificates signed by a "real" CA, so I don't have to distribute my own root CA certificate.
The dream would truly be an internal CA backed by a publicly trusted subordinate cert (limited to the domain you control). But afaik that can’t happen until the Name Constraint Extension is enforced by “all” clients.
> But afaik that can’t happen until the Name Constraint Extension is enforced by “all” clients.
For those curious about this extension, see RFC 5280 § 4.2.1.10:
* https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.10
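For anyone who wants to experiment with the extension: here's a rough sketch of minting a name-constrained CA with OpenSSL (all names and file paths are made up for illustration). The catch is exactly the one described above — issuance is technically limited to the permitted subtree, but only clients that enforce the extension will reject violations.

```shell
# Hypothetical sketch: a self-signed CA whose issuance is constrained
# to .example.com via the critical nameConstraints extension.
cat > constrained.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_ca
prompt = no
[dn]
CN = Example Constrained CA
[v3_ca]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.example.com
EOF

openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 -config constrained.cnf

# Inspect the resulting extension:
openssl x509 -in ca.crt -noout -text | grep -A2 "Name Constraints"
```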
You don't actually want this. This intermediate CA would still be subject to the same extensive CAB Forum / vendor root program requirements (audited yearly via WebTrust) as a root CA. There are a ton of requirements, including mandatory response times, that inevitably make this require a fully staffed team to operate.
That would be a violation of the real CA's duty to only sign certs that they have some basis for believing are correct. (This basis almost always boils down to "controls the DNS".)
Out of curiosity, what's the problem with distributing your own root CA? Is it security? Or is it "just a PITA"?
Mostly the second.
For those to whom this is unclear: this would be called an "intermediate CA".
Wouldn't that allow you to issue certificates for google.com? Correct me if I've misunderstood, but for the sake of discussion pretend cert pinning doesn't exist (use another example domain if it's easier).
I'm not 100% sure how certificates work. What I imagined would be possible is having a certificate for mydomain.com which could be used to sign certificates for its subdomains.
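For what it's worth, an ordinary certificate for mydomain.com can't be used that way: publicly issued leaf certificates carry basicConstraints CA:FALSE, and clients reject any chain that passes through a non-CA certificate. A quick sketch showing what that looks like (self-signed stand-in, assuming OpenSSL 1.1.1+ for `-addext`):

```shell
# Generate a typical leaf-style cert (explicitly not a CA) and inspect it.
openssl req -x509 -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.crt \
  -days 30 -subj "/CN=mydomain.com" \
  -addext "basicConstraints=critical,CA:FALSE" \
  -addext "subjectAltName=DNS:mydomain.com"

# The CA:FALSE here is what stops this cert from signing others.
openssl x509 -in leaf.crt -noout -text | grep -A1 "Basic Constraints"
```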
1 reply →
Yeah, that is the major drawback.
I've been using it too and it works well, particularly with Caddy to do automatic certificates with ACME where possible
Plus all my services go through Tailscale, so although I am leaking internal hostnames via DNS, all those records point to are 100.* addresses
I'm a fan of both Caddy and Tailscale; any chance you have any devnotes to share on your setup?
My notes were pretty rough but I've tried putting them into a gist here:
https://gist.github.com/mojzu/b093d79e73e7aa302dde8e335945b2...
Which covers using step-ca with Caddy to get TLS certs via ACME for subdomains, and protecting internal services using client certificates/mtls
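For a rough idea of what that looks like, here's a minimal Caddyfile sketch (hostnames, the step-ca ACME directory URL, and file paths are all assumptions for illustration — see the gist for the actual setup):

```
# Hypothetical Caddyfile: get certs from a local step-ca ACME endpoint
# and require client certificates (mTLS) for the internal service.
{
	acme_ca https://ca.internal:9000/acme/acme/directory
	acme_ca_root /etc/ssl/internal_root_ca.crt
}

service.mydomain.com {
	tls {
		client_auth {
			mode require_and_verify
			trusted_ca_cert_file /etc/ssl/internal_root_ca.crt
		}
	}
	reverse_proxy internal-app:8080
}
```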
I then install Tailscale on the host running the Docker containers, and configure the firewall so that only other 100.* addresses can connect to ports 80/443/444. The combination of VPN + mTLS mitigates most of my worries about exposing internal subdomains on public DNS
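That firewall rule might look something like this with ufw (a sketch — the port list matches the comment above, but check your own interface and services before enabling):

```shell
# Only allow the reverse-proxy ports from the tailnet. Tailscale assigns
# 100.* addresses out of the 100.64.0.0/10 CGNAT range.
sudo ufw default deny incoming
sudo ufw allow from 100.64.0.0/10 to any port 80,443,444 proto tcp
sudo ufw enable
```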
1 reply →
Tailscale+TLS: isn't that two strong layers of encryption?
Yeah, it's probably overkill, but I think the multiple layers would help in cases where I misconfigured something, or if an account someone uses to log into Tailscale were compromised. For example, when I ran the containers on a Linux host, I discovered later that Docker was bypassing the firewall rules and allowing all connections. It probably wasn't a big deal because of the mTLS (and the server was behind a NAT router anyway, so it was only addressable within the local network)
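For anyone hitting the same Docker-bypasses-the-firewall behavior: Docker publishes ports through its own NAT/forwarding rules, so that traffic never reaches the INPUT chain that ufw filters. Docker does honor the DOCKER-USER chain, so one fix is to filter there instead (a sketch — "eth0" is a placeholder for the host's external interface):

```shell
# Drop forwarded traffic to the published ports unless it comes from the
# tailnet (Tailscale's 100.64.0.0/10 CGNAT range).
iptables -I DOCKER-USER -i eth0 -p tcp -m multiport --dports 80,443,444 \
  ! -s 100.64.0.0/10 -j DROP
```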
This is the correct answer ;)
If you're going to run a serious internal network, you'll need the basic things like NTP, DNS, a CA server, and, yes, some kind of MDM to distribute internal CA certificates to your people. The real PITA is when you don't have these in place.
I've been setting this up for my homelab under .home.arpa, seems to work pretty well so far.