
Comment by schoen

11 years ago

Four things:

(1) You can do the attack you describe today with existing CAs that are issuing DV certs, because posting a file on the web server is a DV validation method already in routine use (a minimal sketch of that flow follows this list).

(2) There is another validation method we've developed called dvsni, which is stronger in some respects (but yes, it still trusts DNS); a rough sketch also follows this list.

(3) We're expecting to do multipath testing of the proof of site ownership to make MITM attacks harder; one possible reading is sketched below. (But as with much existing DV in general, someone who can completely compromise DNS can cause misissuance.)

(4) If the community finds solutions that make any step of this process stronger, Let's Encrypt will presumably adopt them.
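
For concreteness, here is a minimal sketch of the file-based DV flow mentioned in point (1), on the CA side. The challenge path and helper names are hypothetical, not the actual Let's Encrypt protocol:

```python
# Minimal sketch of file-based DV (CA side). The token path and names
# here are illustrative assumptions, not the real Let's Encrypt design.
import secrets
import urllib.request

def new_challenge() -> str:
    """Generate a random token the applicant must publish on their site."""
    return secrets.token_urlsafe(32)

def check_challenge(domain: str, token: str) -> bool:
    """Fetch the token back over plain HTTP. Anyone who controls the
    server -- or the DNS/network path the CA uses to reach it -- passes."""
    url = f"http://{domain}/.well-known/ca-challenge/{token}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", "replace").strip() == token
    except OSError:
        return False
```

The weakness under discussion is visible right in the fetch: the check validates whatever the CA happens to reach over the network, so DNS or path compromise defeats it.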
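
Point (2)'s dvsni could look something like the following. This is my loose reading; the ".acme.invalid" suffix and the nonce derivation are assumptions on my part. The applicant serves a known self-signed certificate under a challenge-derived SNI name, and the CA connects with that SNI to see whether it comes back:

```python
# Sketch of a DVSNI-style probe (details assumed, not authoritative).
import hashlib
import socket
import ssl

def dvsni_probe(ip: str, nonce: bytes, expected_cert_der: bytes) -> bool:
    # Derive the challenge SNI name from the nonce (derivation assumed).
    sni = hashlib.sha256(nonce).hexdigest() + ".acme.invalid"
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the challenge cert is self-signed
    with socket.create_connection((ip, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni) as tls:
            # Pass only if exactly the expected challenge cert is served.
            return tls.getpeercert(binary_form=True) == expected_cert_der
```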
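
And one way to read the multipath testing in point (3), sketched with the third-party dnspython package. The resolver list is a stand-in for geographically separate vantage points; a real deployment would presumably run the full validation from distinct networks, not just the DNS lookup:

```python
import dns.resolver  # pip install dnspython

# Stand-ins for independent vantage points (illustrative only).
VANTAGE_RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

def multipath_agree(domain):
    """Return the A-record set if all vantage points agree, else None."""
    seen = []
    for server in VANTAGE_RESOLVERS:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        seen.append({rr.to_text() for rr in r.resolve(domain, "A")})
    return seen[0] if all(s == seen[0] for s in seen) else None
```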

Let's Encrypt can run a web spider: crawl the web to build a database of actively used domain names.

Periodically poll DNS for the names in that database to obtain NS records for pretty much all of the web, plus A records for all the actively used hosts found in the crawl. Keep this cache as a trace of how DNS records change.

Do this DNS polling from several different geographic locations, and you've got a history of DNS as seen from different viewpoints.
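
A sketch of what that polling history could look like in practice; the schema and helper names here are mine, not part of the proposal:

```python
# Store every (domain, vantage, address) observation with a timestamp,
# so later issuance requests can be checked against how DNS actually
# looked over time. Table layout is illustrative only.
import sqlite3
import time

def record_observation(db, domain, vantage, addrs):
    db.execute("""CREATE TABLE IF NOT EXISTS dns_history
                  (domain TEXT, vantage TEXT, addr TEXT, seen REAL)""")
    db.executemany(
        "INSERT INTO dns_history VALUES (?, ?, ?, ?)",
        [(domain, vantage, addr, time.time()) for addr in addrs])
    db.commit()
```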

When you get a request for a certificate for, say, "microsoft.com", look up the domain name as described on the Let's Encrypt site. But also check that the returned IP address appears in the history, either from multiple locations for a few days or from one location for a few months.
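
Continuing the sketch above, that check might look like this; the "few days" and "few months" thresholds are placeholders for whatever values the proposal intends:

```python
# Query the dns_history table from the previous sketch: accept the
# address if several vantage points saw it over a few days, or if one
# vantage point saw it over a few months. Thresholds are illustrative.
DAY = 86400.0

def address_is_established(db, domain, addr):
    rows = db.execute(
        "SELECT vantage, MIN(seen), MAX(seen) FROM dns_history"
        " WHERE domain = ? AND addr = ? GROUP BY vantage",
        (domain, addr)).fetchall()
    multi = [v for v, first, last in rows if last - first >= 3 * DAY]
    single_long = any(last - first >= 90 * DAY for _, first, last in rows)
    return len(multi) >= 2 or single_long
```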

If this test fails, check whether the historic IP addresses for this domain in the polled cache are already serving TLS with a certificate signed by a regular CA. If so, reject the application.
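
That fallback test is straightforward to sketch, since a default TLS client context already verifies both the chain and the hostname against the system trust store:

```python
# Connect to a historic address and see whether it already serves a
# certificate that chains to a trusted root for this domain name.
import socket
import ssl

def serves_ca_signed_tls(addr, domain):
    ctx = ssl.create_default_context()  # system trust store, full checks
    try:
        with socket.create_connection((addr, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain):
                return True  # handshake succeeded: chain and name verified
    except OSError:  # includes ssl.SSLError
        return False
```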

Otherwise, continue with validation as described on the Let's Encrypt web page.

  • Thanks for the interesting suggestion -- I'll mention it to the people working on the CA implementation as a possible validation technique to consider.

Agree completely, and it's worth noting that I don't have a solution to the issues I mentioned, either.

Leveraging other (potentially insecure) paths to establish trust might help further enhance confidence in authenticity: e.g., verification using something like the broad-based strategy of Moxie's Perspectives (except over plaintext), additional verification of the plaintext on the site as fetched via Tor, or retrieving a cached copy of the site securely from the Internet Archive or search engines.
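
As a sketch of that second-path idea, one could fetch the same resource directly and through Tor and flag disagreement. This assumes a local Tor SOCKS proxy on 127.0.0.1:9050 and the third-party requests package with SOCKS support:

```python
import requests  # pip install requests[socks]

# Route the second fetch through a local Tor daemon (address assumed).
TOR_PROXIES = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}

def paths_agree(url):
    direct = requests.get(url, timeout=15).content
    via_tor = requests.get(url, timeout=60, proxies=TOR_PROXIES).content
    return direct == via_tor  # a mismatch hints at a path-local MITM
```

The same comparison could be made against an Internet Archive capture or a search-engine cache instead of Tor.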

dvsni and multipath testing sound quite interesting, and I think defense in depth is the right approach.

Having been at Akamai's recent Edge conference, I didn't hear much from them on this. Does anyone have additional details about their interest in the project?