Comment by ahoka
2 years ago
Yes, people do that. After looking at a huge number of incorrect TLS-related code and configuration on SO, I’m now pretty sure that most systems run without validating certificates properly.
This was more true when libraries and tooling defaulted to not checking.
Somewhere in my history is a recent HN (or maybe Reddit) post where somebody insists curl has been 100% backwards compatible from day one, and, like, no: originally curl ignored certificate errors, and today you have to ask for that explicitly (-k/--insecure) if it's what you want.
I think (but don't take my word for it) that Requests (the Python library) was the same. Initially it didn't check, then years back the authors were told that if you don't check you get what you didn't pay for (ie nothing) and they changed the defaults.
Python itself is trickier because it was really hard to convince Python people that DNS names, the names we actually care about in certificates, aren't Unicode. I mean, they can be (IDNs), but not in a way that's useful to a machine. If your job is "present this DNS name to a user" then sure, here's a bunch of tricky and maybe flawed code to best-effort turn the bytes into human-readable Unicode text. But your cert-checking code isn't a human: it wants bytes, and we deliberately designed the DNS records and the certificate bytes to be identical, so you're just doing a byte-for-byte comparison.
The Python people really wanted to convert everything messily to Unicode, which is - at best if you do it perfectly - slower with the same results and at worst a security hole for no reason.
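The byte-level matching argument above can be sketched in a few lines of Python; the certificate SAN bytes here are a hypothetical example, not pulled from a real cert:

```python
# A minimal sketch of byte-level name matching for an IDN. The IDN
# "bücher.example" lives in DNS and in certificates as its ASCII
# (Punycode) form, so the checker never needs Unicode at all.
host = "bücher.example"
wire_name = host.encode("idna")       # the bytes that appear on the wire
cert_san = b"xn--bcher-kva.example"   # hypothetical SAN bytes from a cert
print(wire_name == cert_san)          # plain byte-for-byte comparison
```

Converting both sides to Unicode first would only add a lossy, error-prone step on top of what is already an exact byte comparison.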
OpenSSL is at least partly to blame for terrible TLS APIs. OpenSSL is what I call a "stamp collector" library. It wants to collect all the obscure corner cases, because some of its authors are interested. Did the Belgian government standardise a 54-bit cipher called "Bingle Bongle" in 1997? Cool, let's add that to our library. Does anybody use it? No. Should anybody use it? No. But it exists so we added it. A huge waste of everybody's time.
The other reason people don't validate is that it was easier to turn it off and get their work done, which is a big problem that should be addressed systemically rather than by individually telling people "No".
So I'd guess that today out of a thousand pieces of software that ought to do TLS, maybe 750 of them don't validate certificates correctly, and maybe 400 of those deliberately don't do it correctly because the author knew it would fail and had other priorities.
Apache used not to reject SNI hostname values ending in a dot, in contravention of RFC 6066. Firefox notoriously didn't strip the trailing dot before sending the extension. Some versions of curl (or the underlying libraries?) did, some didn't. I filed a bug at bz.apache.org about it.
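The client-side fix is tiny. This is my own sketch (not Apache's or curl's actual code) of normalizing a name before it goes into the SNI extension:

```python
def sni_name(hostname: str) -> str:
    """Strip the single trailing root dot before using a name as an
    SNI server_name value; RFC 6066 forbids the trailing dot there."""
    return hostname[:-1] if hostname.endswith(".") else hostname

print(sni_name("example.com."))  # example.com
print(sni_name("example.com"))   # example.com
```

Note it removes only the one root-label dot, since "example.com." is a perfectly valid fully-qualified DNS name everywhere except inside that extension.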
requests pulls in certifi (Firefox's trust store, repackaged) via urllib3, so it probably uses those root certs by default, not the system store.
To be fair, that might be partly the fault of TLS libraries. There should be a single sane function that does the least surprising thing, and then lower-level APIs for everything else. Currently you need a checklist of things that must be checked before trusting a connection.
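Python's standard library eventually grew exactly this shape of API: ssl.create_default_context() is the one sane entry point, with the dangerous knobs left to people who go looking for them. A quick check of its defaults:

```python
import ssl

# The "least surprising" default context: certificate validation and
# hostname checking are both on unless you deliberately turn them off.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Turning validation off requires two explicit steps (check_hostname = False, then verify_mode = CERT_NONE), which is the right amount of friction for a footgun.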