
Comment by ekr____

5 months ago

Moreover, there's not really much in the way of choices here. If you don't have this kind of automatic version negotiation then it's essentially impossible to deploy a new version.

Well you can, but that would require a higher level of political skill than normally exists for such things. What would have to happen is that almost everyone would have to agree on the new version and then implement it. Once implementation was sufficiently widespread, you would have a switchover day.

The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.

The big downside of negotiation is that no one ever has to commit to anything, so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
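
For concreteness, here's a toy sketch of the negotiation scheme being debated (illustrative Python, not TLS's actual wire format or version encoding): each side ships the versions it speaks, the server picks, and nothing ever has to be withdrawn on a flag day.

```python
# Toy model of automatic version negotiation (not the real TLS encoding):
# the client offers everything it speaks, the server picks the highest
# version it also supports, and neither side waits for a switchover day.

def negotiate(client_prefs, server_supported):
    """Return the first client-preferred version the server also speaks."""
    for version in client_prefs:  # ordered newest-first
        if version in server_supported:
            return version
    raise ConnectionError("no version in common")

# A new client meeting an old server still interoperates:
print(negotiate(["1.3", "1.2"], {"1.2", "1.1"}))  # -> "1.2"
```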

  • This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.

    Moreover, even in the best case scenario this means that you don't get the benefits of deployment for years if not decades. Even 7 years out, TLS 1.3 is well below 100% deployment. To take a specific example here: we want to deploy PQ ciphers ASAP to prevent harvest-and-decrypt attacks. Why should this wait for 100% deployment?

    > The big downside of negotiation is that no one ever has to commit to anything, so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.

    I don't think this is really that accurate, especially on the Web. The set of options actually in wide use is fairly narrow.

    TLS is used in a lot of different settings, so it's unsurprising that there are a lot of options to cover those settings. TLS 1.3 did manage to reduce those quite a bit, however.

    • > This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.

      Case in point: IPv6 adoption. There's no interoperability or negotiation between it and IPv4 (at least, not in any way that matters), which has led to the mess we're in today.


  • That’s a great theory but in practice such a “flag day” almost never happens. The last time the internet went through such a change was January 1, 1983, when the ARPANET switched from NCP to the newly designed TCP/IP. People want to do something similar on February 1, 2030, to remove IPv4 and switch totally to IPv6, but I give it a 50/50 chance of success, and IPv6 is already about 30 years old. See https://ipv4flagday.net/

    • You don't have to have everyone switch over on the same day as in your example. Once it is decreed that implementations are widespread enough, everyone can switch over to the new version gradually. The "flag day" is when it is decreed that implementations no longer have to support some previously widely used method. Support for that method would then gradually disappear, unless there was some associated cryptographic emergency that could not be dealt with without changing the standard.


  • > The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.

    They learned the lesson of IPv6 here.

  • > that seems to have led to endless bikeshedding, which has created a standard with so many options that it is hardly a standard anymore

    Part of the motivation of TLS 1.3 was to mitigate that. It removed a lot of options for negotiating the ciphersuite.
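
    For a sense of scale, TLS 1.3 (RFC 8446) ships just five ciphersuites, versus the hundreds of suite combinations registered over the TLS 1.2 era:

    ```python
    # The complete TLS 1.3 ciphersuite list from RFC 8446: the suite now
    # names only the AEAD cipher and hash; key exchange and signature
    # algorithms are negotiated separately instead of being baked in.
    TLS13_CIPHERSUITES = [
        "TLS_AES_128_GCM_SHA256",
        "TLS_AES_256_GCM_SHA384",
        "TLS_CHACHA20_POLY1305_SHA256",
        "TLS_AES_128_CCM_SHA256",
        "TLS_AES_128_CCM_8_SHA256",
    ]
    ```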

You could deploy a new version; you'd just have older clients unable to connect to servers implementing the newer versions.

It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility (yes, I realize the 's' stands for secure, not 'ssl', but httpt would still have worked as "HTTP with TLS").

  • > It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility

    That would have been at least a little bit insane, since then web links would embed the protocol version number. As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work.

    I wish we could go the other way - and make http:// implicitly use TLS when TLS is available. Having http://.../x and https://.../x be able to resolve to different resources was a huge mistake.

    • Regarding your last paragraph: Isn’t that pretty much solved thanks to HSTS preload? A non-technical author of a small recipe blog might not know how to set it up, but a bank ought to have staff (and auditors) who take care of stuff like that.
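
      For reference, the opt-in is a single response header (the header syntax is real; the little helper around it is just illustrative):

      ```python
      # Strict-Transport-Security with the 'preload' token is what
      # qualifies a site for submission to the browser preload list.
      HSTS = "max-age=63072000; includeSubDomains; preload"

      def with_hsts(headers):
          """Illustrative helper: attach HSTS to a response header dict."""
          out = dict(headers)
          out["Strict-Transport-Security"] = HSTS
          return out

      print(with_hsts({"Content-Type": "text/html"}))
      ```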


    • > As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work

      Wouldn't we be able to just redirect https->httpt like http requests do right now?

      Sure, it'd be a tiny bit more overhead for servers, but no different from what we already experienced moving away from unencrypted http. Something like the sketch below.
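
      A sketch of that redirect, where "httpt" is the hypothetical scheme from upthread:

      ```python
      # Sketch of the proposed redirect, mirroring today's http -> https
      # redirects; "httpt" is the hypothetical scheme from upthread.
      def legacy_redirect(url):
          """Answer old https:// links with a 301 to the httpt:// equivalent."""
          if url.startswith("https://"):
              return 301, "httpt://" + url[len("https://"):]
          return None  # already on the new scheme

      print(legacy_redirect("https://example.com/x"))  # (301, 'httpt://example.com/x')
      ```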


  • This has a number of negative downstream effects.

    First, recall that links are very often inter-site, so the consequence would be that even when a server upgraded to TLS 1.2, clients would still try to connect with TLS 1.1 because they were using the wrong kind of link. This would delay deployment. By contrast, today when the server upgrades, new clients upgrade along with it.

    Second, in the Web security model, the Origin of a resource (e.g., the context in which the JS runs) is based on scheme/host/port, so httpt would be a different origin from https. Consider what happens if the incoming link is https but internal links are httpt: now different pages on the same site are different origins (sketched below).

    These considerations are so important that when QUIC was developed, the IETF decided that QUIC would also be an https URL (it helps that IETF QUIC's cryptographic handshake is TLS 1.3).
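
    To illustrate the origin split (a sketch using Python's stdlib URL parsing; httpt and its default port are made up, like the scheme itself):

    ```python
    from urllib.parse import urlsplit

    def origin(url):
        """Web origin = (scheme, host, port): the boundary JS runs within."""
        u = urlsplit(url)
        # httpt's default port is hypothetical here, just like the scheme.
        default_port = {"http": 80, "https": 443, "httpt": 443}.get(u.scheme)
        return (u.scheme, u.hostname, u.port or default_port)

    # The same page reached via the two schemes lands in different origins:
    print(origin("https://example.com/page") ==
          origin("httpt://example.com/page"))  # False
    ```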

  • TLS is one of the best success stories of widely applied security with great UX. It would be nowhere near as successful with that attitude.

Depends on what you mean by "this kind", because you want a way to detect attacker-forced downgrades, and that used to be missing.
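
TLS 1.3 closed that gap with a downgrade sentinel (RFC 8446, Section 4.1.3): a 1.3-capable server that ends up negotiating an older version sets the last 8 bytes of ServerHello.random to a fixed value, which 1.3-capable clients check. Roughly (the constants are from the RFC; the checking logic is a simplified sketch):

```python
# RFC 8446 Section 4.1.3 downgrade-protection values (the last 8 bytes
# of ServerHello.random); the checking logic here is a simplified sketch.
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")           # "DOWNGRD\x01"
DOWNGRADE_TLS11_OR_BELOW = bytes.fromhex("444f574e47524400")  # "DOWNGRD\x00"

def downgrade_detected(server_random: bytes, client_offered_tls13: bool) -> bool:
    """Flag a forced downgrade: we offered 1.3, yet the server's random
    carries a sentinel showing it negotiated an older version on purpose,
    so something in the middle tampered with the negotiation."""
    tail = server_random[-8:]
    return client_offered_tls13 and tail in (DOWNGRADE_TLS12,
                                             DOWNGRADE_TLS11_OR_BELOW)
```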