Comment by superkuh

17 days ago

Wow! Does this mean that Firefox can re-enable self-signed certs for its HTTP/3 stack, since it's using a custom implementation and not someone else's big QUIC lib and default build flags anymore? That'd be a huge win for human people and their typical LAN use cases, even if the corporate use cases don't want it for 'security' reasons.

You can still have self-signed certs; you just have to actually set up your own CA and import it as trusted in the relevant trust store so the cert can be verified.
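
To make that concrete, here is a rough Go sketch of the one-time setup, using only the standard library (the names, hostname, IP, and lifetimes are placeholders, and error handling is elided for brevity): mint a CA once, import that one CA cert into the browser, and sign a leaf cert per device.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // One-time: the CA key and its self-signed certificate. This cert
        // is the single thing you import into the browser's trust store.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "My LAN CA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Per device: a leaf cert signed by the CA, not by itself.
        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "nas.lan"},
            DNSNames:     []string{"nas.lan"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.1.10")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(2, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        // CA cert for the trust store; leaf cert (plus leafKey) goes on the device.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }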

You can't just have some random router, printer, NAS, etc. generate its own cert out of thin air and tell the browser to ignore the fact that it can't be verified.

IMO this is a good thing. The way browsers handle HTTPS on older protocols is a result of the number of legacy badly configured systems there are out there which browser vendors don't want to break. Anywhere someone's supporting HTTP/3 they're doing something new, so enforcing a "do it right or don't do it at all" policy is possible.

  • Which also means it's impossible to host a visitable webserver for random people on HTTP/3 without the continued permission of a third-party corporation. Doing it "right" means "do it for the corps' use cases only" to most people, it seems.

    • I'm not sure what you're trying to say here. Your random self-signed cert never worked with HTTP/1.x or HTTP/2 over TLS either, and never served a real purpose unless the client had explicitly trusted your cert.

      HTTP/3 just removes the space for misunderstanding.

Certificate verification in Firefox happens at a layer way above HTTP and TLS (for those who care, it's in PSM), so which QUIC library is used is basically not relevant.
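
As an illustration of that layering (a Go sketch against crypto/tls, not Firefox/PSM code; roots and host are caller-supplied stand-ins): the transport completes the handshake, and the trust decision lives in a callback supplied from above. Swap the transport for a QUIC stack and the policy layer is untouched.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
    )

    // policyConfig keeps the trust decision out of the transport: crypto/tls
    // (or a QUIC library) does the handshake, and this callback applies
    // whatever certificate policy the application wants.
    func policyConfig(roots *x509.CertPool, host string) *tls.Config {
        return &tls.Config{
            InsecureSkipVerify: true, // built-in check off; the callback below is authoritative
            VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                leaf, err := x509.ParseCertificate(rawCerts[0])
                if err != nil {
                    return err
                }
                // Verify against the policy layer's anchors -- independent
                // of whether the bytes arrived over TCP+TLS or QUIC.
                _, err = leaf.Verify(x509.VerifyOptions{Roots: roots, DNSName: host})
                return err
            },
        }
    }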

The reason that Firefox -- and other major browsers -- make self-signed certs so difficult to use is that allowing users to override certificate checks weakens the security of HTTPS, which otherwise relies on certificates being verifiable against the trust anchor list. It's true that this makes certain cases harder, but the judgement of the browser community was that supporting those cases wasn't worth the security tradeoff. In other words, it's a policy decision, not a technical one.

  • It's a pretty bad one, though. It massively undermines the security of connections to local devices for a slight improvement in security on the open internet. It's very frustrating how browser vendors don't even seem to consider it something worth solving, even if e.g. the way it is presented to the user is different. At the moment if you just use plain HTTP then things do mostly work (apart from some APIs which are somewhat arbitrarily locked to 'secure contexts' which means very little about the trustworthiness of the code that does or does not have access to those APIs), but if you try to use HTTPS then you get a million 'this is really insecure' warnings. There's no 'use HTTPS but treat it like HTTP' option.

    • Either you really are secure, or ideally you should not be able to even pretend you are secure. Allowing "pretend it's secure" downgrades the security in all contexts.

      IMHO they should gradually lock all dynamic code execution, such as dynamic CSS and JavaScript, behind an explicit toggle for insecure HTTP sites.

      > It massively undermines the security of connections to local devices

      No, you see the prompt; it is insecure. If the network admin wants it secure, that means either an internal CA or a literally free cert from Let's Encrypt. As the network admin did not care, it's insecure.

      "but I have legacy garbage with hardcoded self-signed certs" then reverse proxy that legacy garbage with Caddy?

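      Roughly this, in Go terms (the backend address and cert paths are placeholders; Caddy does the equivalent in a couple of lines of config): skip verification only on the internal hop to the legacy device, and hand clients a cert they can actually verify.

        package main

        import (
            "crypto/tls"
            "log"
            "net/http"
            "net/http/httputil"
            "net/url"
        )

        func main() {
            // The legacy device with its hardcoded self-signed cert.
            backend, err := url.Parse("https://192.168.1.50")
            if err != nil {
                log.Fatal(err)
            }
            proxy := httputil.NewSingleHostReverseProxy(backend)
            // Only this one internal hop skips verification; it never
            // reaches the clients.
            proxy.Transport = &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            }
            // The front cert is one browsers can verify: internal CA or
            // Let's Encrypt, per the comment above.
            log.Fatal(http.ListenAndServeTLS(":443", "front.crt", "front.key", proxy))
        }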

    • I don't think it's correct to say that browser vendors don't think it's worth solving. For instance, Martin Thomson from Mozilla has done some thinking about it. https://docs.google.com/document/u/0/d/170rFC91jqvpFrKIqG4K8....

      However, it's not an entirely trivial problem to get right, especially because of how deeply the scheme is tied into the Web security model. Your example here is a good one of what I'm talking about:

      > At the moment if you just use plain HTTP then things do mostly work (apart from some APIs which are somewhat arbitrarily locked to 'secure contexts' which means very little about the trustworthiness of the code that does or does not have access to those APIs),

      You're right that being served over HTTPS doesn't make the site trustworthy, but what it does do is provide integrity for the identity of the server. So, for instance, the user might look at the URL and decide that the server is trustworthy and can be allowed to use the camera or microphone. However, if you use HTTPS but without verifying the certificate, then an attacker might in the future substitute themselves and take advantage of that camera and microphone access. Another example is when the user enters their password.

      Rather than saying that browser vendors don't think this is worth solving in the abstract I would say that it's not very high on the priority list, especially because most of the ideas people have proposed don't work very well.

    • I'm pretty sure private PKIs are an option that is pretty straightforward to use.

      Security is still a lot better because the root is communicated out of band.
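
      For example (a rough Go sketch; the file path and hostname are illustrative): once the private root has arrived out of band, the client verifies the LAN server exactly as it would a public one.

        package main

        import (
            "crypto/tls"
            "crypto/x509"
            "fmt"
            "log"
            "net/http"
            "os"
        )

        func main() {
            // The private root, obtained out of band.
            pemBytes, err := os.ReadFile("lan-ca.pem")
            if err != nil {
                log.Fatal(err)
            }
            roots := x509.NewCertPool()
            if !roots.AppendCertsFromPEM(pemBytes) {
                log.Fatal("no certs found in lan-ca.pem")
            }
            client := &http.Client{Transport: &http.Transport{
                // Trust only the private root; full verification still runs.
                TLSClientConfig: &tls.Config{RootCAs: roots},
            }}
            resp, err := client.Get("https://nas.lan/")
            if err != nil {
                log.Fatal(err)
            }
            defer resp.Body.Close()
            fmt.Println(resp.Status)
        }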

I think self-signed certs should be possible on principle, but is there a reason to use HTTP/3 for LAN use cases? In low-latency situations there's barely any advantage to HTTP/3 over HTTP/2, and even HTTP/1.1 is good enough for most use cases (and will outperform the other options in terms of pure throughput).