As already noted on this thread, you can't use certbot today to get an IP address certificate. You can use lego [1], but figuring out the exact command line took me some effort yesterday. Here's what worked for me:
lego --domains 206.189.27.68 --accept-tos --http --disable-cn run --profile shortlived
The push for shorter and shorter cert lifetimes is a really poor idea, and indicates that the people working on these initiatives have no idea how things are done in the wider world.
These changes are coming from the CAB forum, which includes basically every entity that ships a popular web browser and every entity that ships certificates trusted in those browsers.
There are use cases for certificates that exist outside of that umbrella, but they are by definition niche.
In your answer (and excluding those using ACME): is this a good behavior (that should be kept) or a lame behavior (that we should aim to improve)?
Shorter and shorter cert lifetimes are a good idea because they are the only way to effectively handle a private key leak. A better idea might exist, but nobody has found one yet.
Thing is, NOTHING is stopping anyone from already getting short-lived certs and proactively rotating through them. What it is really saying is: we own the process, so we'll make Chrome not play ball with your site anymore unless you do as we say...
The CA system has cracks that short-lived certs don't fix, so meanwhile we'll make everyone as uncomfortable as possible while we rearrange deck chairs.
Though if I may put on my tinfoil hat for a moment, I wonder if current algorithms for certificate signing have been broken by some government agency or hacker group and now they're able to generate valid certificates.
But I guess if that were true, then shorter cert lives wouldn't save you.
It's less about IP address transience, and more about IP address control. Rarely does the operator of a website or service control the IP address. It's to limit the CA's risk.
> Are IP addresses more transient than a domain within a 45 day window?
If I don't assign an EIP to my EC2 instance and shut it down, I'm nearly guaranteed to get a different IP when I start it again, even if I start it within seconds of shutdown completing.
It'd be quite a challenge to use this behavior maliciously, though. You'd have to get assigned an IP that someone else was using recently, and the person using that IP would need to have also been using TLS with either an IP address certificate or with certificate verification disabled.
> If something goes wrong, like the pipeline triggering certbot goes wrong, I won't have time to fix this. So I'd be at a two day renewal with a 4 day "debugging" window.
I think a pattern like that is reasonable for a 6-day cert:
- renew every 2 days, and have a "4 day debugging window"
- renew every 1 day, and have a "5 day debugging window"
You should probably be running your renewal pipeline more frequently than that: if you had let your ACME client set itself up on a single server, it would probably run every 12h for a 90-day certificate. The ACME client won't actually give you a new certificate until the old one is old enough to be worth renewing, and you have many more opportunities to notice that the pipeline isn't doing what you expect than if you only run when you expect to receive a new certificate.
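That cadence can be sketched as a tiny gate. This is a hypothetical example, not any ACME client's actual logic: the paths and the self-signed stand-in cert are invented, and a real pipeline would check the cert it actually serves (assumes GNU date):

```shell
# Hypothetical renewal gate: with a 6-day cert, renew once less than a
# third of the lifetime (2 days) remains. Safe to run every 12h, since
# it is a no-op most of the time. Paths and cert are stand-ins.
cert=/tmp/demo.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out "$cert" -days 6 -subj "/CN=example.test" 2>/dev/null
end=$(openssl x509 -in "$cert" -noout -enddate | cut -d= -f2)
remaining=$(( $(date -d "$end" +%s) - $(date +%s) ))
if [ "$remaining" -lt $(( 2 * 86400 )) ]; then
  echo "renew now"
else
  echo "still fresh: ${remaining}s left"
fi
```

Running it every 12 hours instead of every 2 days means a broken pipeline shows up in your logs within hours, not on renewal day.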
If you are doing this in a commercial context and the 4 day debugging window, or any downtime, would cause you more costs than say, buying a 1 year certificate from a commercial supplier, then that might be your answer there...
What worries me more about the push for shorter and shorter cert terms, instead of building revocation that actually works, is that if your provider fails, you now have very little time to switch to a new one.
Some ACME clients can failover to another provider automatically if the primary one doesn't work, so you wouldn't necessarily need manual intervention on short notice as long as you have the foresight to set up a secondary provider.
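The failover shape is simple to sketch in shell. Everything here is hypothetical: `issue_from` is a stub standing in for a real ACME client invocation, and the CA names are placeholders:

```shell
# Try ACME directories in preference order until one issues.
# issue_from is a stub: pretend the primary CA is down today.
issue_from() { [ "$1" = "secondary-ca" ]; }

issued=""
for ca in primary-ca secondary-ca; do
  if issue_from "$ca"; then
    issued="$ca"
    echo "issued via $ca"
    break
  fi
done
```

The point is only that the fallback has to be configured and tested before the primary fails, not scrambled together during the outage.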
IP addresses must be accessible from the internet, so still no way to support TLS for LAN devices without manual setup or angering security researchers.
I recently migrated to a wildcard (*.home.example.com) certificate for all my home network. Works okay for many parts. However requires a public DNS server where TXT records can be set via API (lego supports a few DNS providers out of the box, see https://go-acme.github.io/lego/dns/ )
IPv6? You wouldn’t even need to expose the actual endpoints out on the open internet. DNAT on the edge and point inbound traffic at a VM responsible for cert renewals, then distribute to the LAN devices actually using those addresses.
Also, I don't see what TLS is supposed to solve here. If you and I (and everyone else) can legitimately get a certificate for 10.0.0.1, then what are you proving exactly over using a self-signed cert?
There would be no way of determining that I am connecting to my-organisation's 10.0.0.1 and not bad-org's 10.0.0.1.
For IPv6, proof of ownership can easily be done with an outbound connection instead. And it would work great for provisioning certs for internal-only services.
>so still no way to support TLS for LAN devices without manual setup or angering security researchers.
Arguably setting up letsencrypt is "manual setup". What you can do is run a split-horizon DNS setup inside your LAN on an internet-routable tld, and then run a CA for internal devices. That gives all your internal hosts their own hostname.sub.domain.tld name with HTTPS.
Frankly: it's not that much more work, and it's easier than remembering IP addresses anyway.
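For what it's worth, the "run a CA" part can be a handful of openssl commands. This is a toy sketch with throwaway /tmp paths and an invented hostname; a real setup should protect the CA key offline (or use a purpose-built tool like step-ca or easy-rsa):

```shell
# Toy internal CA: one self-signed root, one host cert with a DNS SAN,
# then verify the chain. Names and paths are illustrative only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 365 -subj "/CN=Home Lab CA" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout /tmp/host.key \
  -out /tmp/host.csr -subj "/CN=pi.sub.domain.tld" 2>/dev/null
printf 'subjectAltName=DNS:pi.sub.domain.tld\n' > /tmp/host.ext
openssl x509 -req -in /tmp/host.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -days 30 -out /tmp/host.crt \
  -extfile /tmp/host.ext 2>/dev/null
openssl verify -CAfile /tmp/ca.crt /tmp/host.crt
```

Then you only distribute /tmp/ca.crt (the root) to your own devices' trust stores, and internal hosts get real names with HTTPS and no browser warnings.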
idk, the 192.168.0 part has been around since forever. The rest is just a matter of .12 for my laptop, .13 for the one behind the telly, .14 for the pi, etc.
Every time I try to "run a CA", I start splitting hairs.
This is interesting. I am guessing the use case for IP address certs is letting your ephemeral services do TLS without also depending on provisioning a name server record for something you might start hundreds or thousands of, and that will only last for an hour or a day.
One thing this can be useful for is encrypted client hello (ECH), the way TLS/HTTPS can be used without disclosing the server name to any listening devices (standard SNI names are transmitted in plaintext).
To use it, you need a valid certificate for the outer connection to the client-facing server, whose hostname does get broadcast in readable form. For companies like Cloudflare, Azure, and Google, this isn't really an issue, because they can just use the name of their proxies.
For smaller sites, often not hosting more than one or two domains, there is hardly a non-distinct hostname available.
With IP certificates, the outer TLS connection can just use the IP address in its readable SNI field and encrypt the actual hostname for the real connection. You no longer need to be a third party proxying other people's content for ECH to have a useful effect.
Even if it did work, the privacy value of hiding the SNI is pretty minimal for an IP address that hosts only a couple domains, as there are plenty of databases that let you look up an IP address to determine what domain names point there - e.g. https://bgp.tools/prefix/18.220.0.0/14#dns
I don't really see the value in ECH for self-hosted sites regardless. It works for Cloudflare and similar because they have millions of unrelated domains behind their IP addresses, so connecting to their IPs reveals essentially nothing, but if your IP is only used for a handful of related things then it's pretty obvious what's going on even if the SNI is obscured.
> In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC6125]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.
Very, very true; I never thought about orgs like that. However, I don't think someone should use this as a band-aid like that. If the idea is that you want to have a domain associated with a service, then organizationally you probably need systems in place to make that easier.
Very excited about this. IP certs solve an annoying bootstrapping problem for selfhosted/indiehosted software, where the software provides a dashboard for you to configure your domain, but you can't securely access the dashboard until you have a cert.
As a concrete example, I'll probably be able to turn off bootstrap domains for TakingNames[0].
I get why Chrome doesn't want it (it doesn't serve Chrome's interests), but that doesn't explain why Let's Encrypt had to remove it. The reason seems to be "you can't be a Chrome CA and not do exactly what Chrome wants, which is... only things Chrome wants to do". In other words, CAs have been entirely captured by Chrome. They're Chrome Authorities.
Am I the only person that thinks this is insane? All web security is now at the whims of Google?
One reason is that the client certificate with id-kp-clientAuth EKU and a dNSName SAN doesn't actually authenticate the client's FQDN. To do that you'd have to do something of a return routability check at the app layer where the server connects to the client by resolving its FQDN to check that it's the same client as on the other connection. I'm not sure how seriously to take that complaint, but it's something.
IP addresses aren't valid for the SNI used with ECH, even with TLS.
On paper I do agree though it would be a decent option should things one day change there.
Do I understand correctly: does someone have a concrete example of a URL which is both an IP address and HTTPS, widely accessible from the global internet?
e.g.
https://<ipv4-address>/ ?
What's stopping you from creating a "localhost.mydomain.com" DNS record that initially resolves to a public IP so you can get a certificate, then copying the certificate locally, then changing the DNS to 127.0.0.1?
>Successful specifications will provide some benefit to all the relevant parties because standards do not represent a zero-sum game. However, there are sometimes situations where there is a conflict between the needs of two (or more) parties.
>In these situations, when one of those parties is an "end user" of the Internet -- for example, a person using a web browser, mail client, or another agent that connects to the Internet -- the Internet Architecture Board argues that the IETF should favor their interests over those of other parties.
No additional risk IMHO. If you can hijack my service IPs, you can establish control over the IPs or the domain names that point to them. (If you can hijack my DNS IPs, you can often do much more... even with DNSSEC, you can keep serving the records that lead to IPs you hijacked)
Something about a 6-day IP-address-based token brings me back to the question of why we are wasting so much time on utterly wrong TOFU authorization.
If you are supposed to have an establishable identity, I think there is DNSSEC back to the registrar for a name, and (I'm not quite sure what?) back to the AS for the IP.
Then it would be a grave error to issue an IP cert without active insight into BGP. (Or it doesn't matter which chain you have.. But calling a website from a sampling of locations can't be a more correct answer.)
With a 6 day lifetime you'd typically renew after 3 days. If Lets Encrypt is down or refuses to issue then you'd have to choose a different provider. Your browser trusts many different "top of the chain" providers.
With a 30-day cert and renewal 10-15 days in advance, that gives you breathing room.
Personally I think 3 days is far too short unless you have your automation pulling from two different suppliers.
Thank you, I missed the part with several "top of the chain" providers. So all of them would need to go down at the same time for things to really stop working.
How many "top of chain" providers is letsencrypt using? Are they a single point of failure in that regard?
I'd imagine that other "top of chain" providers want money for their certificates and that they might have a manual process which is slower than letsencrypt?
It's a huge ask, but I'm hoping they'll implement code-signing certs some day, even if they charge for it. It would be nice if app stores then accepted those certs instead of directly requiring developer verification.
1) For better or worse, code signing certificates are expected to come with some degree of organizational verification. No one would trust a domain-validated code signing cert, especially not one which was issued with no human involvement.
2) App stores review apps because they want to verify functionality and compliance with rules, not just as a box-checking exercise. A code signing cert provides no assurances in that regard.
They can just do ID verification instead of domain verification, either in-house or outsourced.
App store review isn't what I was talking about; I meant not having to verify your identity with the app store, and using your own signing cert, which can be used across platforms. Moreover, it would make developing signed Windows apps less costly; today it costs several hundred dollars.
I see how this would be useful once we take binary signing for granted. It would probably even be quite unobjectionable if it were simply a domain binding.
However, the very act of trying to make this system less impractical is a concession in the war on general purpose computing. To subsidize its cost would be to voluntarily lose that non-moral line of argument.
I don't understand where the argument is. Being able to publish content that others can authenticate and then trust sounds like a huge win to me. I don't even see why it has to be restricted to code. It's just verifying who the signer is. More trusted systems and more progress happens when we trust the foundations we're building. I don't think that's a war on general purpose computing. I feel like there is this older way of thinking where insecurity is considered a right of some sort. Being able to do things insecurely should be your right, but being able to reach lots of people and force them to use insecure things sounds exactly like a war on general purpose computing.
I see no problem with outsourcing id verification to a trusted partner. Or they could verify payment by charging you $1 to verify you control the payment card, and combine that with address verification by paper-mailing a verification code.
[1] https://go-acme.github.io/lego/
Work for this in Certbot is ongoing here, with some initial work already merged, but much to go. https://github.com/certbot/certbot/issues/10346
https://github.com/certbot/certbot/pull/10370 showed that a proof of concept is viable with relatively few changes, though it was vibe coded and abandoned (but at least the submitter did so in good faith and collaboratively) :/ Change management and backwards compatibility seem to be the main considerations at the moment.
Thank you for posting the lego command!
It allowed me to quickly obtain a couple of IP certificates to test with. I updated my simple TLS certificate checker (https://certcheck.sh) to support checking IP certificates (IPv4 only for now).
I wonder if the support made it to Caddy yet
(seems to be WIP https://github.com/caddyserver/caddy/issues/7399)
It works, but as another comment mentioned there may be quirks with IP certs, specifically IPv6, that I hope will be fixed by v2.11.
IPv4 certs are already working fine for me in Caddy, but I think there's some kinks to work out with IPv6.
Thx!! Love
IP address certificates are particularly interesting for iOS users who want to run their own DoH servers.
A properly configured DoH server (perhaps running unbound) with a properly constructed configuration profile which included a DoH FQDN with a proper certificate would not work in iOS.
The reason, it turns out, is that iOS insisted that both the FQDN and the IP have proper certificates.
This is why the configuration profiles from big organizations like dns4eu and nextdns would work properly when, for instance, installed on an iphone ... but your own personal DoH server (and profile) would not.
OpenSSL is quite particular about the IP address being included in the SAN field of the cert when making a TLS connection, fwiw. iOS engineers may not have explicitly added this requirement and it might just be a side effect of using a crypto library.
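That behavior is easy to see locally. Here's a throwaway self-signed cert (hypothetical /tmp paths, reusing the IP from upthread) showing how the address has to appear as an IP-type SAN entry, which is what OpenSSL's peer-verification checks actually match against:

```shell
# Self-signed cert with the IP in the SAN; openssl records it as an
# "IP Address:" entry, distinct from a DNS name with the same spelling.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ip.key \
  -out /tmp/ip.crt -days 6 -subj "/CN=206.189.27.68" \
  -addext 'subjectAltName=IP:206.189.27.68' 2>/dev/null
openssl x509 -in /tmp/ip.crt -noout -text | grep -A1 'Subject Alternative Name'
```

A cert that only carries the IP in the CN, or as a DNS-type SAN, fails strict verifiers even though it "looks" right, which would explain the iOS behavior described above.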
I use DoH behind a reverse proxy with my own domain daily without any kind of issue
Why 6 day and not 8?
- 8 is a lucky number and a power of 2
- 8 lets me refresh weekly and have a fixed day of the week to check whether there was some API 429 timeout
- 6 is the value of every digit in the number of the beast
- I just don't like 6!
> 8 lets me refresh weekly and have a fixed day of the week to check whether there was some API 429 timeout
There’s your answer.
6 days means on a long enough timeframe the load will end up evenly distributed across a week.
8 days would result in things getting hammered on specific days of the week.
> 6 days means on a long enough timeframe the load will end up evenly distributed across a week.
People will put */5 in cron and the result will be the same, because that's an obvious, easy, nice number.
I thought people generally run it daily? It’s a no-op if it doesn’t need renewal.
So now people that want humans around will renew twice in a week instead of once?
Worry not, cause it's not 6 days (144 hours), it is 6-ish days: 160 hours
And 160 is the sum of the first 11 primes, as well as the sum of the cubes of the first three primes!
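For the skeptical, both identities check out in shell arithmetic:

```shell
echo $(( 2+3+5+7+11+13+17+19+23+29+31 ))  # sum of the first 11 primes
echo $(( 8 + 27 + 125 ))                  # 2^3 + 3^3 + 5^3
```

Both print 160.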
Mr Ramanujan, I presume?
Because it allows you to work for six days, and rest on the seventh. Like God did.
² By the seventh day God had finished the work He had been doing; so on the seventh day He rested from all His work. ³ Then the on-call tech, Lucifer, the Son of Dawn, was awoken at midnight because God did not renew the heavens' and the earths' HTTPS certificate. ⁴ Thusly Lucifer drafted his resignation in a great fury.
I don't think He worked after the 6th day. Went on doing other pet projects
Not my god. My god meant to go into work but got wasted and eventually passed out in the bathtub, fully clothed and holding a bowl of riceroni.
Didn't the Garden of Eden have a pretty massive vulnerability where eating one apple would give you access to all data on good and evil?
It's actually 6 and 2/3 days! I'm trying to figure out a rationale for 160 hours and similarly coming up empty; if anyone knows, I'd be interested.
200 would be a nice round number that gets you to 8 1/3 days, so it comes with the benefits of weekly rotation.
I chose 160 hours.
The CA/B Forum defines a "short-lived" certificate as 7 days, which has some reduced requirements on revocation that we want. That time, in turn, was chosen based on previous requirements on OCSP responses.
We chose a value that's under the maximum, which we do in general, to make sure we have some wiggle room. https://bugzilla.mozilla.org/show_bug.cgi?id=1715455 is one example of why.
Those are based on a rough idea that responding to any incident (outage, etc.) might take a day or two, so (assuming renewal of the certificate or OCSP response midway through its lifetime) you need at least 2 days for incident response plus another day to re-sign everything. Half a lifetime must therefore cover 3 days, so the lifetime needs to be at least 6 days, and then the requirement is rounded up by another day to 7 (to allow the wiggle room, as previously mentioned).
Plus, in general, we don't want to align to things like days or weeks or months, or else you can get "resonant frequency" type problems.
We've always struggled with people doing things like renewing on a cronjob at midnight on the 1st monday of the month, which leads to huge traffic surges. I spend more time than I'd like convincing people to update their cronjobs to run at a randomized time.
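One hedged sketch of that advice: instead of everyone firing at minute zero, derive a stable per-host offset (the hash choice here is arbitrary and mine, though certbot's packaged cron/systemd jobs do something similar with a random sleep):

```shell
# Stable per-host jitter in [0, 3600): hash the hostname so each
# machine picks a different, but consistent, second of the hour.
host=$(hostname 2>/dev/null || echo renewal-host)
jitter=$(( 0x$(printf '%s' "$host" | sha256sum | cut -c1-4) % 3600 ))
echo "sleep ${jitter}s before running the ACME client"
```

Being consistent per host (rather than freshly random each run) also makes the resulting traffic pattern predictable when you're debugging your own renewals.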
It's less than 7 exactly so you cannot set it on a weekly rotation
Six is the smallest perfect number. Perfection is key here.
Why not refresh daily?
There are some great points here.
Next, I hope they focus on issuing certificates for .onion addresses. On the modern web many features and protocols are locked behind HTTPS. The owner of a .onion has a key pair for it, so proving ownership is more trustworthy than even DNS.
'Automated Certificate Management Environment (ACME) Extensions for ".onion" Special-Use Domain Names'
* https://datatracker.ietf.org/doc/html/rfc9799
* https://acmeforonions.org
* https://onionservices.torproject.org/research/appendixes/acm...
But isn't it unnecessary to use https, since tor itself encrypts and verifies the identity of the endpoint?
For example HTTP/2 and HTTP/3 require HTTPS. While technically HTTPS is redundant, .onion sites should avoid requiring browsers to add special casing for them due to their low popularity compared to regular web sites.
Yes, but browsers moan if you connect to a website without https, no matter if it's on localhost or an onion service.
It would give you a certificate chain which may authenticate the onion service as being operated by whom it purports to be. Of course, depending on context, a certificate that is useful for that purpose might itself be too much of an information leak.
For people who want IP certificates, keep in mind that certbot doesn't support it yet, with a PR still open to implement it: https://github.com/certbot/certbot/pull/10495
I think acme.sh supports it though.
Some ACME clients that I think currently support IP addresses are acme.sh, lego, traefik, acmez, caddy, and cert-manager. Certbot support should hopefully land pretty soon.
cert-manager maintainer chiming in to say that yes, cert-manager should support IP address certs - if anyone finds any bugs, we'd love to hear from you!
We also support ACME profiles (required for short-lived certs) as of v1.18, which is our oldest currently supported[1] version.
We've got some basic docs[2] available. Profiles are set on a per-issuer basis, so it's easy to have two separate ACME issuers, one issuing longer lived certs and one issuing shorter, allowing for a gradual migration to shorter certs.
[1]: https://cert-manager.io/docs/releases/ [2]: https://cert-manager.io/docs/configuration/acme/#acme-certif...
I wonder if transport mode IPsec can be relevant again if we're going to have IP address certificates. Ditto RFC 5660 (which -full disclosure- I authored).
Maybe, but probably not. Various always-on, SDN, or wide-scale site-to-site VPN schemes have been deployed widely enough for long enough now that they're expected infrastructure at this point.
Even getting people to use certificates on IPSEC tunnels is a pain. Which reminds me, I think the smallest models of either Palo Alto or Checkpoint still have bizarre authentication failures if the certificate chain is too long, which was always weird to me because the control planes had way more memory than necessary for well over a decade.
You're not thinking creatively enough. I'm only interested in ESP, not IKE. Consider having the TLS handshake negotiate the use of ESP, and when selected the system would plumb ESP for this connection using keys negotiated by TLS (using the exporter). Think ktls/kssl but with ESP. Presto -- no orchestration of IKE credentials, nothing -- it should just work.
The real key is getting ESP HW offload.
Is IPsec still relevant?
It's not. What I have in mind is TLS handshake mediated ESP SA pair keying and policy. Why? Because ESP is much much simpler to implement in silicon than TCP+TLS.
ESP is stateless if using IPv6 (no fragmentation), or even if using IPv4 (fragmented packets -> let the host handle them; PMTUD should mean no need for fragmentation the vast majority of the time). Statelessness makes HW offload easy to implement.
IPsec is a terrible, huge, and messy standard; vendors that implemented it took 20 years to stop getting a CVE every year.
But the very nice thing about ESP (over UDP or not) is that it's much simpler to build HW offload than for TLS.
Using the long ago past as FUD here is not useful.
I have now implemented a 2-week renewal interval to test the change to 45 days, and now they come with a 6-day certificate?
This is no criticism, I like what they do, but how am I supposed to do renewals? If something goes wrong, like the pipeline triggering certbot goes wrong, I won't have time to fix this. So I'd be at a two day renewal with a 4 day "debugging" window.
I'm certain there are some who need this, but it's not me. Also the rationale is a bit odd:
> IP address certificates must be short-lived certificates, a decision we made because IP addresses are more transient than domain names, so validating more frequently is important.
Are IP addresses more transient than a domain within a 45 day window? The static IPs you get when you rent a vps, they're not transient.
> Are IP addresses more transient than a domain within a 45 day window? The static IPs you get when you rent a vps, they're not transient.
They can be as transient as you want. For example, on AWS, you can release an elastic IP any time you want.
So imagine I reserve an elastic IP, then get a 45 day cert for it, then release it immediately. I could repeat this a bunch of times, only renting the IP for a few minutes before releasing it.
I would then have a bunch of 45 day certificates for IP addresses I don't own anymore. Those IP addresses will be assigned to other users, and you could have a cert for someone else's IP.
Of course, there isn't a trivial way to exploit this, but it could still be an issue and defeats the purpose of an IP cert.
The short-lived requirement seems pretty reasonable for IP certs as IP addresses are often rented and may bounce between users quickly. For example if you buy a VM on a cloud provider, as soon as you release that VM or IP it may be given to another customer. Now you have a valid certificate for that IP.
6 days actually seems like a long time for this situation!
Cloud providers could check the transparency lists, and if there’s a valid cert for the IP, quarantine it until the cert expires. Problem solved.
2 replies →
The push for shorter and shorter cert lifetimes is a really poor idea, and indicates that the people working on these initiatives have no idea how things are done in the wider world.
Which wider world?
These changes are coming from the CAB forum, which includes basically every entity that ships a popular web browser and every entity that ships certificates trusted in those browsers.
There are use cases for certificates that exist outside of that umbrella, but they are by definition niche.
12 replies →
Well they offer a money-back guarantee. And other providers of SSL certificates exist.
6 replies →
How are things done in the wider world ?
In your answer (and excluding those using ACME): is this a good behavior (that should be kept) or a lame behavior (that we should aim to improve) ?
Shorter and shorter cert lifetime is a good idea because it is the only way to effectively handle a private key leak. Better idea might exist but nobody found one yet
Rule by the few, us little people don't matter.
Thing is, NOTHING is stopping anyone from already getting short-lived certs, being proactive, and rotating through them. What this is saying is: well, we own the process, so we'll make Chrome not play ball with your site anymore unless you do as we say...
The CA system has cracks, that short lived certs don't fix, so meanwhile we'll make everyone as uncomfortable as possible while we rearrange deck chairs.
awaiting downvotes in earnest.
At some point it makes sense to just let us use self signed certs. Nobody believes SSL is providing attestation anyways.
6 replies →
It's really security theater, too.
Though if I may put on my tinfoil hat for a moment, I wonder if current algorithms for certificate signing have been broken by some government agency or hacker group and now they're able to generate valid certificates.
But I guess if that were true, then shorter cert lives wouldn't save you.
8 replies →
It's less about IP address transience, and more about IP address control. Rarely does the operator of a website or service control the IP address. It's to limit the CA's risk.
> Are IP addresses more transient than a domain within a 45 day window?
If I don't assign an EIP to my EC2 instance and shut it down, I'm nearly guaranteed to get a different IP when I start it again, even if I start it within seconds of shutdown completing.
It'd be quite a challenge to use this behavior maliciously, though. You'd have to get assigned an IP that someone else was using recently, and the person using that IP would need to have also been using TLS with either an IP address certificate or with certificate verification disabled.
Ok, though if you're in that situation, is an IP cert the correct solution?
1 reply →
> If something goes wrong, like the pipeline triggering certbot goes wrong, I won't have time to fix this. So I'd be at a two day renewal with a 4 day "debugging" window.
I think a pattern like that is reasonable for a 6-day cert:
- renew every 2 days, and have a "4 day debugging window"
- renew every 1 day, and have a "5 day debugging window"
Monitoring options: https://letsencrypt.org/docs/monitoring-options/
This makes me wonder if the scripts I published at https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... should have the expiry thresholds defined in units of hours, instead of integer days?
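For what it's worth, `openssl x509 -checkend` takes its threshold in seconds, so hour-granularity checks are easy to express. A self-contained sketch (the 2-day demo cert and the thresholds are just examples, not taken from the linked scripts):

```shell
# Generate a throwaway 2-day cert, then check its remaining lifetime
# at hour granularity. -checkend takes seconds and exits non-zero if
# the cert would be expired that far in the future.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=demo" -keyout demo.key -out demo.crt 2>/dev/null

check_hours() {
  if openssl x509 -in demo.crt -checkend $(( $1 * 3600 )) -noout >/dev/null; then
    echo "ok: valid for more than $1 hours"
  else
    echo "warn: expires within $1 hours"
  fi
}

check_hours 36   # ok: the cert is valid for roughly 48 hours
check_hours 72   # warn: 72 hours exceeds the 2-day lifetime
```

With 6-day certs, an alert threshold like "less than 72 hours remaining" probably makes more sense than any integer number of days.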
You should probably be running your renewal pipeline more frequently than that: if you had let your ACME client set itself up on a single server, it would probably run every 12h for a 90-day certificate. The ACME client won't actually give you a new certificate until the old one is old enough to be worth renewing, and you have many more opportunities to notice that the pipeline isn't doing what you expect than if you only run when you expect to receive a new certificate.
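Concretely, the stock pattern is just a frequent cron entry; the client no-ops until the cert is actually due, so most runs are cheap but still exercise the pipeline. (The schedule below is illustrative, not certbot's packaged default.)

```
# /etc/cron.d entry: try twice a day. certbot only renews once the
# certificate is close enough to expiry, so the frequent runs mainly
# serve to surface pipeline breakage early.
17 3,15 * * * root certbot renew --quiet
```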
If you are doing this in a commercial context and the 4 day debugging window, or any downtime, would cause you more costs than say, buying a 1 year certificate from a commercial supplier, then that might be your answer there...
There will be no certificates longer than 45 days by any CA in browsers in a few years.
What worries me more about the push for shorter and shorter cert terms, instead of making revocation that actually works, is that if a provider fails, you now have very little time to switch to a new one.
This is a two-sided solution, and one significant reason for shorter certificate lifetimes helps make revocation work better.
Some ACME clients can failover to another provider automatically if the primary one doesn't work, so you wouldn't necessarily need manual intervention on short notice as long as you have the foresight to set up a secondary provider.
People have tried. Revocation is a very hard problem to solve on this scale.
>I won't have time to fix this
Which should push you to automate the process.
He's expressly talking about broken automation.
3 replies →
IP addresses must be accessible from the internet, so still no way to support TLS for LAN devices without manual setup or angering security researchers.
I recently found this, might help someone here. Genius solution. https://sslip.io/
I recently migrated to a wildcard (*.home.example.com) certificate for all my home network. Works okay for many parts. However requires a public DNS server where TXT records can be set via API (lego supports a few DNS providers out of the box, see https://go-acme.github.io/lego/dns/ )
I use a fairly niche provider (https://go-acme.github.io/lego/dns/zonomi/index.html) and it's supported - I'd go further and say they support most providers
IPv6? You wouldn’t even need to expose the actual endpoints out on the open internet. DNAT on the edge and point inbound traffic on a VM responsible for cert renewals, then distribute to the LAN devices actually using those addresses.
One can also use a private CA for that scenario.
Exactly -- how many 192.168.0.1 certs do you think LetsEncrypt wants to issue?
2 replies →
I mean if it's not routable how do you want to prove ownership in a way nobody else can? Just make a domain name.
Also I don't see the point of what TLS is supposed to solve here? If you and I (and everyone else) can legitimately get a certificate for 10.0.0.1, then what are you proving exactly over using a self-signed cert?
There would be no way of determining that I am connecting to my-organisation's 10.0.0.1 and not bad-org's 10.0.0.1.
5 replies →
For ipv6 proof of ownership can easily be done with an outbound connection instead. And would work great for provisioning certs for internal only services.
>so still no way to support TLS for LAN devices without manual setup or angering security researchers.
Arguably setting up letsencrypt is "manual setup". What you can do is run a split-horizon DNS setup inside your LAN on an internet-routable tld, and then run a CA for internal devices. That gives all your internal hosts their own hostname.sub.domain.tld name with HTTPS.
Frankly: it's not that much more work, and it's easier than remembering IP addresses anyway.
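A minimal version of that internal CA with plain openssl might look like this (the names and lifetimes are placeholders):

```shell
# 1) Root CA for the LAN.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=Home Lab CA" -keyout ca.key -out ca.crt 2>/dev/null

# 2) Key + CSR for an internal host.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=pi.home.example.com" -keyout host.key -out host.csr 2>/dev/null

# 3) Sign it, putting the name in the SAN (modern clients ignore the CN).
printf "subjectAltName=DNS:pi.home.example.com\n" > san.cnf
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 90 -extfile san.cnf -out host.crt 2>/dev/null

# 4) Check the chain; then trust ca.crt on your clients.
openssl verify -CAfile ca.crt host.crt
```

Tools like step-ca or mkcert automate the same flow if hand-rolling openssl commands is where the hair-splitting starts.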
> run a CA
> easier than remembering IP addresses
idk, the 192.168.0 part has been around since forever. The rest is just a matter of .12 for my laptop, .13 for the one behind the telly, .14 for the pi, etc.
Every time I try to "run a CA", I start splitting hairs.
1 reply →
There’s also the DNS-01 challenge that works well for devices on private networks.
What do you mean by 'LAN', everything should be routable globally with IPv6 decade ago anyway /s
This is interesting. I'm guessing the use case for IP address certs is so your ephemeral services can do TLS without also depending on provisioning a name-server record, for something you might start hundreds or thousands of that will only last an hour or a day.
One thing this can be useful for is encrypted client hello (ECH), the way TLS/HTTPS can be used without disclosing the server name to any listening devices (standard SNI names are transmitted in plaintext).
To use it, you need a valid certificate for the connection to the server which has a hostname that does get broadcast in readable form. For companies like Cloudflare, Azure, and Google, this isn't really an issue, because they can just use the name of their proxies.
For smaller sites, often not hosting more than one or two domains, there is hardly a non-distinct hostname available.
With IP certificates, the outer TLS connection can just use the IP address in its readable SNI field and encrypt the actual hostname for the real connection. You no longer need to be a third party proxying other people's content for ECH to have a useful effect.
That doesn't work, as neither SNI nor the server_name field of the ECHConfig are allowed to contain IP addresses: https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.html#...
Even if it did work, the privacy value of hiding the SNI is pretty minimal for an IP address that hosts only a couple domains, as there are plenty of databases that let you look up an IP address to determine what domain names point there - e.g. https://bgp.tools/prefix/18.220.0.0/14#dns
I don't really see the value in ECH for self-hosted sites regardless. It works for Cloudflare and similar because they have millions of unrelated domains behind their IP addresses, so connecting to their IPs reveals essentially nothing, but if your IP is only used for a handful of related things then it's pretty obvious what's going on even if the SNI is obscured.
As far as I understand you cannot use IP address as the outer certificate as per https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.txt
> In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC6125]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.
The July announcement for IP address certs listed a handful of potential use cases: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...
Thanks! This is helpful to read.
No dependency on a registrar sounds nice. More anonymous.
> No dependency on a registrar sounds nice.
Actually the main benefit is no dependency on DNS (both direct and root).
IP is a simple primitive, i.e. "is it routable or not ?".
3 replies →
IP addresses also are assigned by registrars (ARIN in the US and Canada, for instance).
4 replies →
Yeah actually seems pretty useful to not rely on the name server for something that isn't human facing.
> I am guessing the use case for ip address certs is so your ephemeral services can do TLS communication
There's also this little thing called DNS over TLS and DNS over HTTPS that you might have heard of ? ;)
I don't quite understand how this relates?
1 reply →
Maybe you want TLS but getting a proper subdomain for your project requires talking to a bunch of people who move slowly?
Very, very true; I never thought about orgs like that. However, I don't think this should be used as a bandaid. If the idea is that you want a domain associated with a service, then organizationally you probably need systems in place to make that easier.
2 replies →
Very excited about this. IP certs solve an annoying bootstrapping problem for selfhosted/indiehosted software, where the software provides a dashboard for you to configure your domain, but you can't securely access the dashboard until you have a cert.
As a concrete example, I'll probably be able to turn off bootstrap domains for TakingNames[0].
[0]: https://takingnames.io/blog/instant-subdomains
Has anyone actually given a good explanation as to why TLS Client Auth is being removed?
It's a requirement from the Chrome root program. This page is probably the best resource on why they want this: https://googlechrome.github.io/chromerootprogram/moving-forw...
I get why Chrome doesn't want it (it doesn't serve Chrome's interests), but that doesn't explain why Let's Encrypt had to remove it. The reason seems to be "you can't be a Chrome CA and not do exactly what Chrome wants, which is... only things Chrome wants to do". In other words, CAs have been entirely captured by Chrome. They're Chrome Authorities.
Am I the only person that thinks this is insane? All web security is now at the whims of Google?
3 replies →
One reason is that the client certificate with id-kp-clientAuth EKU and a dNSName SAN doesn't actually authenticate the client's FQDN. To do that you'd have to do something of a return routability check at the app layer where the server connects to the client by resolving its FQDN to check that it's the same client as on the other connection. I'm not sure how seriously to take that complaint, but it's something.
Because Google doesn't want anyone using PKI for anything but simple websites
Because using a public key infrastructure for client certificates is terrible.
mTLS is probably the only sane situation where a private PKI should be used.
It competes with "Sign in with Google" SSO.
How are IP address certificates useful?
* DoT/DoH
* An outer SNI name when doing ECH perhaps
* Being able to host secure http/mail/etc without being beholden to a domain registrar
To save others a trip to Kagi: DoT / DoH = DNS over TLS [1] / https [2]
E.g.:
[1] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...
[2] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...
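For the self-hosted DoT/DoH case, the server side is a small amount of config; an unbound sketch might look like this (the paths and address are placeholders, and you'd need unbound built with TLS and HTTP/2 support):

```
server:
    # DoT on 853 and DoH on 443, both using the IP certificate.
    interface: 0.0.0.0@853
    interface: 0.0.0.0@443
    tls-port: 853
    https-port: 443
    tls-service-key: "/etc/unbound/206.189.27.68.key"
    tls-service-pem: "/etc/unbound/206.189.27.68.fullchain.pem"
    http-endpoint: "/dns-query"
```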
IP addresses aren't valid for the SNI used with ECH, even with TLS. On paper I do agree, though, that it would be a decent option should things one day change there.
1 reply →
Oh nice! I hadn't considered DoT/DoH. The ECH angle is interesting. Thanks.
Do I understand correctly: would someone have a concrete example of URL which is both an IP address and HTTPS, widely accessible from global internet? e.g. https://<ipv4-address>/ ?
The websites for DNS servers known by IP? https://1.1.1.1/ presents a valid cert although it redirects.
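You can poke at this locally, too; OpenSSL will only match an IP against a SAN entry, never the CN. A throwaway demo (self-signed cert, TEST-NET-3 documentation address):

```shell
# Self-signed cert whose SAN carries an IP address.
openssl req -x509 -newkey rsa:2048 -nodes -days 6 \
  -subj "/CN=ignored" -addext "subjectAltName=IP:203.0.113.7" \
  -keyout ip.key -out ip.crt 2>/dev/null

# Succeeds: the address is in the SAN.
openssl verify -CAfile ip.crt -verify_ip 203.0.113.7 ip.crt

# Fails: no SAN entry for this address.
openssl verify -CAfile ip.crt -verify_ip 203.0.113.8 ip.crt
```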
Out of curiosity, any other example without redirect, in which the URL stays https://<ip> in the browser?
Letsencrypt is my hero
If I can use my DHCP assigned IP, will this allow me to drop having to use self-signed certificates for localhost development?
No, they will only give out certificates if you can prove ownership of the IP, which means it being publicly routable.
Finally a reason to adopt IPv6 for your local development
1 reply →
A lot of publicly routable IP addresses are assigned by DHCP...
It's just control isn't it, not ownership? I can't prove ownership of the IPs assigned to me, but I can prove control.
1 reply →
Sorry, I wasn’t precise enough. I’m at a university and our IP addresses are publicly routable, I think.
Browsers consider ‘localhost’ a secure context without needing https
For local /network/ development, maybe, but you’d probably be doing awkward hairpin natting at your router.
it's nice to be able to use https locally if you're doing things with HTTP/2 specifically.
What's stopping you from creating a "localhost.mydomain.com" DNS record that initially resolves to a public IP so you can get a certificate, then copying the certificate locally, then changing the DNS to 127.0.0.1?
Other than basically being a pain in the ass.
One can also use the DNS-01 challenge in that scenario.
What is a good use case for an IP address certificate for the average company? Say, e-commerce or SaaS-startup?
The Internet is for End Users https://datatracker.ietf.org/doc/html/rfc8890
>Successful specifications will provide some benefit to all the relevant parties because standards do not represent a zero-sum game. However, there are sometimes situations where there is a conflict between the needs of two (or more) parties.
>In these situations, when one of those parties is an "end user" of the Internet -- for example, a person using a web browser, mail client, or another agent that connects to the Internet -- the Internet Architecture Board argues that the IETF should favor their interests over those of other parties.
Incorporated entities are just secondary users.
Can you elaborate on the context of your answer, please? I cannot connect it to anything the original post or I did write.
This comment used to say that this was staging-only. (Never mind, I was confused following the links from the original article.)
That is a very old article that seems to be outdated now.
I guess IP certs won't really be used for anything important, but isn't there a bigger risk due to BGP hijacking?
No additional risk IMHO. If you can hijack my service IPs, you can establish control over the IPs or the domain names that point to them. (If you can hijack my DNS IPs, you can often do much more... even with DNSSEC, you can keep serving the records that lead to IPs you hijacked)
Does anyone know when Caddy plans on supporting this?
We've supported it for about a year!
Very nice, thank you guys!
https://caddy.community/t/doubt-about-the-new-lets-encrypt-c...
Honestly not a big fan of IP address certs in the context of dynamic IP address generation
Something about a 6 day long IP address based token brings me back to the question of why we are wasting so much time on utterly wrong TOFU authorization?
If you are supposed to have an establishable identity, I think there is DNSSEC back to the registrar for a name, and (I'm not quite sure what?) back to the AS for the IP.
Domains map one-to-one with registrars, but multiple AS can be using the same IP address.
Then it would be a grave error to issue an IP cert without active insight into BGP. (Or it doesn't matter which chain you have... But calling a website from a sampling of locations can't be a more correct answer.)
2 replies →
This sounds like a very good thing, like a lot of stuff coming from letsencrypt.
But what risks are attached with such a short refresh?
Is there someone at the top of the certificate chain who can refuse to give out further certificates within the blink of an eye?
If yes, would this mean that within 6 days all affected certificates would expire, like a very big Denial of Service attack?
And after 6 days everybody goes back to using HTTP?
Maybe someone with more knowledge about certificate chains can explain it to me.
With a 6 day lifetime you'd typically renew after 3 days. If Lets Encrypt is down or refuses to issue then you'd have to choose a different provider. Your browser trusts many different "top of the chain" providers.
With a 30-day cert renewed 10-15 days in advance, you'd have breathing room.
Personally I think 3 days is far too short unless you have your automation pulling from two different suppliers.
Thank you, I missed the part with several "top of the chain" providers. So all of them would need to go down at the same time for things to really stop working.
How many "top of chain" providers is letsencrypt using? Are they a single point of failure in that regard?
I'd imagine that other "top of chain" providers want money for their certificates and that they might have a manual process which is slower than letsencrypt?
20 replies →
[dead]
It's a huge ask, but I'm hoping they'll implement code-signing certs some day, even if they charge for it. It would be nice if app stores then accepted those certs instead of directly requiring developer verification.
1) For better or worse, code signing certificates are expected to come with some degree of organizational verification. No one would trust a domain-validated code signing cert, especially not one which was issued with no human involvement.
2) App stores review apps because they want to verify functionality and compliance with rules, not just as a box-checking exercise. A code signing cert provides no assurances in that regard.
They could just do ID verification instead of domain validation, either in-house or outsourced.
App store review isn't what I was talking about; I meant not having to verify your identity with the app store, and being able to use your own signing cert across platforms. Moreover, it would make developing signed Windows apps less costly; it costs several hundred dollars today.
2 replies →
I see how this would be useful once we take binary signing for granted. It would probably even be quite unobjectionable if it were simply a domain binding.
However, the very act of trying to make this system less impractical is a concession in the war on general-purpose computing. To subsidize its cost would be to voluntarily lose that non-moral line of argument.
I don't understand where the argument is. Being able to publish content that others can authenticate and then trust sounds like a huge win to me. I don't even see why it has to be restricted to code. It's just verifying who the signer is. More trusted systems and more progress happens when we trust the foundations we're building. I don't think that's a war on general purpose computing. I feel like there is this older way of thinking where insecurity is considered a right of some sort. Being able to do things insecurely should be your right, but being able to reach lots of people and force them to use insecure things sounds exactly like a war on general purpose computing.
1 reply →
Would be cool. But since they’re a non-profit, they would need some way to make it scalable.
I see no problem with outsourcing id verification to a trusted partner. Or they could verify payment by charging you $1 to verify you control the payment card, and combine that with address verification by paper-mailing a verification code.