This certificate industry has been such a racket. It's never made explicit that certificates and encryption solve two completely separate problems. They get conflated, and non-technical users rightly get confused about which one is trying to solve a problem they aren't sure why they have.
The certificate authorities are delighted that the self-signed certificate errors keep getting redder, bolder, and bigger. A self-signed certificate warning means "Warning! The admin of the site you're connecting to wants this conversation to be private, but it hasn't been proven that he has 200 bucks for us to say he's cool".
But so what if he's cool? Yeah I like my banking website to be "cool" but for 200 bucks I can be just as "cool". A few years back the browsers started putting extra bling on the URL bar if the coolness factor was high enough - if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge. And they should, that means someone had to fax over vials of blood with the governor's seal that it's a legitimate institution in that state or province. But my little 200 dollar, not pulsating but still green certificate means "yeah digitalsushi definitely had 200 bucks and a fax machine, or at least was hostmaster@digitalsushi.com for damned sure".
And that is good enough for users. No errors? It's legit.
What's the difference between the URL bar being green because I coughed up 200 bucks, and bright red with klaxons because I didn't cough up the 200 bucks to prove I own a personal domain? Like I said, a racket. The certificate authorities love causing a panic. But don't tell me users are any safer just 'cause I had 200 bucks. They're not.
The cert is just for warm and fuzzies. The encryption is to keep snoops out. If I made a browser, I would have 200 dollar "hostmaster" verification be some orange, cautious URL bar - "this person has a site that we have verified to the laziest extent possible without getting sued for not even doing anything at all". But then I probably wouldn't be getting any tips in my jar from the CAs at the end of the day.
> A self signed certificate warning means "Warning! The admin on the site you're connecting to wants this conversation to be private but it hasn't been proven that he has 200 bucks for us to say he's cool"
no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to NSA which is forwarding all of your traffic to the site you're on".
Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private.
Unfortunately, there is no way to guarantee that to be true if the identity of the server certificate can't somehow be tied to the identity of the server.
So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation to be private", so no error message is shown.
But the moment you accept the connection over ssl, the user agent assumes the connection to be intended to be private and failure to assert identity becomes a terminal issue.
This doesn't mean that the CA way of doing things is the right way - far from it. It's just the best that we currently have.
The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.
> The solution is something nobody has quite come up with yet.
SSH has. It tells me:
WARNING, You are connecting to this site (fi:ng:er:pr:in:t) for the first time. Do your homework now. IF you deem it trustworthy right now then I will never bother you again UNLESS someone tries to impersonate it in the future.
That model isn't perfect either but it is much preferable over the model that we currently have, which is: Blindly trust everyone who manages to exert control over any one of the 200+ "Certificate Authorities" that someone chose to bake into my browser.
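The SSH model is easy to state in code. Here's a toy sketch of pin-on-first-use (the store path and return values are invented for illustration, not anything SSH actually uses):

```python
import hashlib
import json
import os

# Hypothetical local pin store, analogous to ~/.ssh/known_hosts.
STORE = os.path.expanduser("~/.known_sites.json")

def check_fingerprint(host, der_cert, store=STORE):
    """Trust-on-first-use: pin the certificate's fingerprint on first
    contact, and complain only if a later connection presents a
    different one (possible impersonation)."""
    fp = hashlib.sha256(der_cert).hexdigest()
    known = {}
    if os.path.exists(store):
        with open(store) as f:
            known = json.load(f)
    if host not in known:
        known[host] = fp              # first contact: pin it
        with open(store, "w") as f:
            json.dump(known, f)
        return "first-use"            # UI should prompt once, here
    return "ok" if known[host] == fp else "CHANGED"
```

The "do your homework now" prompt happens exactly once, on the "first-use" branch; after that, only a fingerprint change makes noise.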
No, I wouldn't say so. Having SSL is better than having nothing on pretty much any site. But if you don't want to pay somebody $200 for nothing, you would probably consider serving http by default on your site, because given how browsers behave, it just looks "safer" to a user who knows nothing about cryptography. Which is nonsense. It's worse than nothing.
And CAs are not "authorities" at all. They could lie to you, they could be compromised. Of course, the fact that this certificate has been confirmed by "somebody" makes it a little more reliable than if it had never been confirmed by anyone at all, but these "somebodies", the CAs, don't have any control over the situation; they're just some guys that came up with the idea to make money like that early enough. You are as good a CA as Symantec is: you could just start selling certificates and it would be the same — except, well, you are just some guy, so browsers wouldn't accept your certificates, so they'd be worth nothing. It's all just about trust, and I'm not sure I trust Symantec more than I trust you. (And I don't mean I actually trust you, by the way.)
For everyone else it's not really about SSL, security and CAs, it's just about how popular browsers behave.
So, no, monopolies that exist only because they're the ones allowed to do something are never good. Unless, maybe, they do it for free.
There's no question in my mind that the whole thing is a racket and militates against security (you generally don't even know all the evil organisations that your browser implicitly trusts - and all the organisations that they trust etc).
There are certainly other options too: here's my suggestion.
The first time you go to a site whose certificate you haven't seen before, the browser should show a nice friendly page that doesn't make a fuss about how dangerous it is, and shows a fingerprint image for the site that you can verify elsewhere (say, from a mail you've been sent), with a list of images from fingerprint servers it knows about that have a record for that site shown next to it.
Once you accept, it should store that certificate and allow you access to that site without making a big fuss or making it look like it's less secure than an unencrypted site. This should be a relatively normal flow and we should make the user experience accessible to normal people.
It's basically what we do for ssh connections to new hosts.
"Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private."
This is misguided thinking, pure and simple. This line of thinking has convinced your everyday webmaster that encrypting data on a regular basis is more trouble than it's worth, and has allowed the NSA (or the Chinese or the Iranian authorities, or what have you) to simply put in a tap and slurp the entire internet without even going through the trouble of targeting and impersonating. Basically, this is the thinking that has enabled dragnet surveillance of the internet with such ease.
> no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to NSA which is forwarding all of your traffic to the site you're on".
That would be correct if you could assume that the NSA couldn't fake certificates for websites. But it can, so it's wrong and misleading. It's certificate pinning, notary systems etc. that actually give some credibility to the certificate you're currently using, not whatever the browsers indicate as default.
FWIW, (valid) rogue certificates have been found in the wild several times, CAs have been compromised etc. ...
Browsers shouldn't silently accept self-signed, but there is a class of servers where self-signed is the best we've got: connecting to embedded devices. If I want to talk over the web to the new printer or fridge I got, they have no way of establishing trust besides me trusting my first request to them (TOFU).
>So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation to be private", so no error message is shown.
Maybe a security-conscious person thinks that, but the typical user does not knowingly choose http over https, and thus the danger of MitM and (unaccepted) snooping is at least as large for the former.
So it's somewhat debatable why we'd warn users that "hey, someone might be reading this and impersonating the site" for self-signed https but not http.
The use case for the CA system is to prevent conventional criminal activity -- not state-level spying or lawful intercept. The $200 is just a paper trail that links the website to either a purchase transaction or some sort of communication detail.
The self-signed cert risk has nothing to do with the NSA... if it's your cert or a known cert, you add it to the trust store, otherwise, you don't.
Private to the NSA and reasonably private to the person sitting next to you are different use cases. The current model is "I'm sorry, we can't make this secure against the NSA and professional burglars so we're going to make it difficult to be reasonably private to others on the network".
It's as if a building manager, scared that small amounts of sound can leak through a door, decided that the only solution is to nail all the office doors open and require you to sign a form in triplicate that you are aware the door is not completely soundproof before you are allowed to close it to make a phone call. (Or jump through a long registration process to have someone come and install a heavy steel soundproofed door which will require replacement every 12 months.)
After all, if you're closing the door, it's clearly meant to be private. And if we can't guarantee complete security against sound leaks to people holding their ear to a glass on the other side, surely you mustn't be allowed to have a door.
Self-signed certificates are still better than http plain text. I understand not showing the padlock icon for self-signed certificates, I don't understand why you would warn people away from them when the worst case is that they are just as unsafe as when they use plain http. IMHO this browser behavior is completely nonsensical.
Here's one thing that's NOT the solution: throwing out all encryption entirely. Secure vs insecure is a gradient. The information that you're now talking to the same entity as you were when you first viewed the site is valuable. For example it means that you can be sure that you're talking to the real site when you log in to it on a public wifi, provided you have visited that site before. In fact, I trust a site that's still the same entity as when I first visited it a whole lot more than a site with a new certificate signed by some random CA. In practice the security added by CAs is negligible, so it makes no sense to disable/enable encryption based on that.
Certificates don't even solve the problem they attempt to solve, because in practice there are too many weaknesses in the chain. When you first downloaded firefox/chrome, who knows that the NSA didn't tamper with the CA list? (not that they'd need to)
The Perspectives addon for Firefox (and Moxie Marlinspike's Convergence, which grew out of it) was a good attempt to resolve some of the problems with self-signed certs.
Unfortunately, no browsers adopted the project, and it is no longer compatible with Firefox. There are a couple forks which are still in development, but they are pretty underdeveloped.
I wonder if Mozilla would be more likely to accept this kind of project into Firefox today, compared to ~4 years ago when it was first released, now that privacy and security may be more important topics to the users of the browser.
The solution, at least for something decentralized, seems to be a web of trust: multiple other identities sign your public key, each signature carrying some assurance that the signer reasonably believes your actual identity is in fact represented by that public key.
That's what PGP/GPG people seem to do, anyway.
Why can't I get my personally-generated cert signed by X other people who vouch for its authenticity?
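The trust decision itself is simple to sketch. Here's a toy version (the threshold and the names are invented; real PGP trust computation is more nuanced, with full/marginal trust levels):

```python
# Web-of-trust sketch: accept a key once enough identities I
# already trust have vouched for (signed) it.

def key_is_trusted(signers, trusted_introducers, threshold=3):
    """signers: ids who signed the key; trusted_introducers: ids I
    personally trust to verify identities before signing."""
    vouches = len(set(signers) & set(trusted_introducers))
    return vouches >= threshold
```

The hard parts are everything around this: how you come to trust the introducers in the first place, and how signatures are distributed and revoked.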
> no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to NSA which is forwarding all of your traffic to the site you're on".
Well... that's true regardless, as the NSA almost certainly has control over one or more certificate authorities.
It's interesting that your boogeyman is the NSA and not scammers. I think scammers are 1000X more likely. Especially since the NSA can just see the decrypted traffic from behind the firewall. There's no technology solution for voluntarily leaving the backdoor open.
Nah. The NSA, or any adversary remotely approaching them in resources, has the ability to generate certificates that are on your browser's trust chain. Self-signed and unknown-CA warnings suggest that a much lower level attacker may be interfering.
> The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.
We do have a solution that does accept self-signed certificates. The remaining pieces need to be finished and the players need to come together though:
Let's Encrypt seems like the right "next step", but we still need to address the man-in-the-middle problem with HTTPS, and that is something the blockchain will solve.
I totally agree that CAs are a racket. There's zero competition in that market and the gate-keepers (Microsoft, Mozilla, Apple, and Google) keep it that way (mostly Microsoft however).
That being said: Identity verification is important as the encryption is worthless if you can be trivially man-in-the-middled. All encryption assures is that two end points can only read communications between one another, it makes no assurances that the two end points are who they claim to be.
So verification is a legitimate requirement and it does have a legitimate cost. The problem is the LOWEST barriers to entry are set too high, this has become a particular problem when insecure WiFi is so common and even "basic" web-sites really need HTTPS (e.g. this one).
HTTP can be man-in-the-middled passively, and without detection; making dragnets super easy.
In order for HTTPS self-signed certs to be effectively man-in-the-middled, the attacker needs to be selective: if they MITM indiscriminately, clients can record which public key was used. The content provider can run a process on top of a VPN / Tor that periodically requests a resource from the server; if it detects that the service is being MITMed, it can shut the service down and bring in a certificate authority.
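That detection process can be sketched with Python's stdlib (the VPN/Tor plumbing is out of scope here, and the function names are made up):

```python
import hashlib
import socket
import ssl

def live_cert_fingerprint(host, port=443):
    # Grab whatever certificate the network path serves us, WITHOUT
    # validating it -- we only want to fingerprint what we were given.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def being_mitmed(host, expected_fp):
    # Run this from an outside vantage point (VPN exit, Tor, a
    # friend's machine): a mismatch means someone on that path is
    # terminating TLS with a key that isn't ours.
    return live_cert_fingerprint(host) != expected_fp
```

If `being_mitmed` ever returns True from a vantage point you don't control, you know the interception is happening somewhere upstream of that point.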
Edit: Also, all this BS about how HTTPS implies security is beside the grandparent's point: certificates and encryption are currently conflated to the great detriment of security, and they need not be.
That's the standard motivation for CAs, but I don't buy it.
Most of the time, I'm much more interested in a domain identity than a corporate identity. If I go to bigbank.com and am presented with a certificate, I want to know if I am talking to bigbank.com -- not that I'm talking to "Big Bank Co." (or at least one of the legal entities around the world under that name).
Therefore it would make much more sense if your TLD made a cryptographic assertion that you are the legal owner of a domain, and that this information could be utilized up the whole protocol stack.
That would not have a legitimate cost, apart from the domain name system itself.
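For what it's worth, DANE's TLSA record (RFC 6698) is an existing attempt at exactly this: the domain owner publishes a hash of their TLS key in DNS, authenticated via DNSSEC all the way from the root, with no third-party CA involved. Roughly (the digest here is made up):

```
; "The TLS server at example.com:443 uses a key with this SHA-256
; digest" -- asserted by the domain owner, authenticated by DNSSEC.
; 3 1 1 = DANE-EE usage, SubjectPublicKeyInfo selector, SHA-256.
_443._tcp.example.com. IN TLSA 3 1 1 2af90fcb453dd2bbbdf2d5d2fc1a8a1e09f22e1b36f7f44c418f81eb67c86d0c
```

The catch, of course, is that browsers never shipped DANE validation, and you're now trusting the DNSSEC hierarchy instead of the CA list.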
Without some kind of authentication, the encryption TLS offers provides no meaningful security. It might as well be an elaborate compression scheme. The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session. But of course, real attackers trivially see all the traffic for a session, because they snare victims with routing, DNS, and layer-2 redirection.
What's especially baffling about self-signed certificate advocacy is the implied threat model. Low- and mid-level network attackers and crime syndicates can't compromise a CA. Every nation state can, of course (so long as the site in question isn't public-key-pinned). But nation states are also uniquely capable of MITMing connections!
>The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session
Could you elaborate here? With a self-signed cert, the server is still not sending secret information in the first few packets; it just tells you (without authentication) which public key to use for the key exchange that establishes the symmetric session keys.
The threat model would be eavesdroppers who can't control the channel, only look. Using the SS cert would be better than an unencrypted connection, though it still shouldn't be represented as being as secure as full TLS. As it stands, the site operator is either forced to go through a CA to get a cert, or to serve unencrypted so that all attackers can see.
I'm not entirely sure I understand your point, so if I misunderstood you please correct me.
First, TLS rests on three properties; lose any one, and it becomes essentially useless:
1) Authentication - you're talking to the right server
2) Encryption - nobody saw what was sent
3) Integrity - nothing was modified in transit
Without authentication, you essentially are not protected against anything. Any router, any government can generate a cert for any server or hostname.
Perhaps you don't think EV certs have a purpose - personally, I think they're helpful to ensure that even if someone hijacks a domain they cannot issue an EV cert. Luckily, the cost of certificates is going down over time (usually you can get the certs you mentioned at $10/$150). That's what my startup (https://certly.io) is trying to help people get, cheap and trusted certificates (sorry for the promotion here)
The warning pages are really ridiculous. Why doesn't every HTTP page show a warning you have to click through?
But it's not like MITM attacks are not real. CAs don't realistically do a thing about them, but it is true that you can't trust that your connection is private based on TLS alone. (unless you're doing certificate pinning or you have some other solution).
You're absolutely right. From first principles, HTTP should have a louder warning than self-signed HTTPS.
Our hope is that Let's Encrypt will reduce the barriers to CA-signed HTTPS sufficiently, that it will become realistic for browsers to show warning indicators on HTTP.
If they did that today, millions of sites would complain, "why are you forcing us to pay money to CAs, and deal with the incredible headache of cert installation and management?". With Let's Encrypt, the browsers can point to a simple, single-command solution.
Eventually maybe the browsers will do that. Currently far too many websites are HTTP-only to allow for that behavior, but if that changes and the vast majority of the web is over SSL it would make sense to start warning for HTTP connections. That would further reduce the practicality of SSL stripping attacks.
It's not enough to keep the snoops out - you need to KNOW you're keeping the snoops out. That's what SSL helps with. A certificate is just a public key, tied to an identity, signed by a public (aka trusted) authority. Sites can also pin their certificates: if this is done, then even if a 3rd party can procure a fake cert, they can't snoop the traffic without the exact cert the web server uses.
Site: Here's my public key. Use it to verify that anything I sent you came from me. But don't take my word for it, verify it against a set of trusted authorities pre-installed on your machine.
Browser: Ok, your cert checks out. Here's my public key. You can use it for the same.
Site: Ok, now I need you to reply to this message with the entire certificate chain you have for me, to make sure a 3rd party didn't install a root cert and inject keys between us. Sign it with your private key and encrypt it with my public key.
Browser: Ok, here it is: ASDSDFDFSDFDSFSD.
Site: That checks out. Ok, now you can talk to me.
This is what certificates help with. There are verification standards that apply, and all the certificate authorities have to agree to follow these standards when issuing certain types of SSL certificates. The most stringent, the "Green bar" with the entity name, often require verification through multiple means, including bank accounts. Certificate authorities that fail to verify properly can have their issuing privileges revoked (though this is hard to do in practice, it can be done).
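In client code, the "don't take my word for it, verify against trusted authorities" step boils down to a few lines with a stock TLS library. A Python stdlib sketch (hostnames are illustrative):

```python
import socket
import ssl

def verified_context():
    # create_default_context() loads the machine's trusted root CAs --
    # the same set of pre-installed authorities the dialogue above
    # relies on.
    ctx = ssl.create_default_context()
    ctx.check_hostname = True            # cert must name the host we asked for
    ctx.verify_mode = ssl.CERT_REQUIRED  # chain must lead to a trusted root
    return ctx

def connect_verified(host, port=443):
    # The handshake raises ssl.SSLCertVerificationError if the chain
    # doesn't check out or the cert names the wrong host.
    ctx = verified_context()
    sock = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(sock, server_hostname=host)
```

Everything the "green bar" debate is about happens inside that one `wrap_socket` call: which roots are trusted, and how hard the issuer checked before signing.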
Here are some comparison screenshots of the "bling" that is being described (it's hard to even tell that some of these sites are SSL'd without getting the EV)
I'm pissed off 'cos I'm on the board for rationalwiki.org and we have to pay a friggin' fortune to get the shiny green address bar ... because end users actually care, even as we know precisely what snake oil the whole SSL racket is. Gah.
I'm all for CAs burning in a special hell. The other cost, though, was always getting a unique IP. Is that still a thing? Has someone figured out multiple certificates for different domains on the same IP? Weren't we running out of IPv4 at some point?
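(For what it's worth, TLS's Server Name Indication extension was designed to fix exactly this: the client names the host during the handshake, so the server can pick the matching cert for that one IP. Server-side it looks roughly like this in Python; the cert/key paths are hypothetical:)

```python
import ssl

# Hypothetical cert/key files for two sites sharing one IP.
CERTS = {
    "a.example.com": ("a.pem", "a.key"),
    "b.example.com": ("b.pem", "b.key"),
}

def pick_cert(ssl_socket, server_name, base_context):
    # Called mid-handshake with the hostname the client sent via SNI;
    # swap in a context holding that site's certificate.
    if server_name in CERTS:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        # ctx.load_cert_chain(*CERTS[server_name])  # needs the real files
        ssl_socket.context = ctx

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.sni_callback = pick_cert
```

The limiting factor was never the protocol but old clients (notably IE on Windows XP) that don't send SNI.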
The thing is, without a chain of trust, the self-signed certificate might be from you or it might be from the "snoops" themselves. Certificates that don't contain any identifying information are vulnerable to man-in-the-middle attacks.
They might as well say something even more ominous: "If your certificate expires, your site will no longer be accessible."
Of course, we know that's not true either, but try explaining to your visitors how to bypass the security warning (newer browsers sure don't make it obvious, even if you know to look for it).
There are trusted free certificates as well, like the ones from StartSSL.
> if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge
Yeah, $10,000 and legal documentation proving that they are exactly the same legal entity as the one stated on the certificate. All verified by a provider that's been deemed trustworthy by your browser's developers.
Finally, if a certificate is self-signed, it generally should be a large warning to most users: the certificate was made by an unknown entity, and anybody may be intercepting the communication. Power users understand when self-signed certs are used, but they don't get scared of red warnings either, so that's not an issue.
> This certificate industry has been such a racket. It's never made explicit that certificates and encryption solve two completely separate problems. They get conflated, and non-technical users rightly get confused about which one is trying to solve a problem they aren't sure why they have.
But a man-in-the-middle attack will strip away any secrecy encryption provides, and to prevent that, we require certificate authorities to perform some minimal checks that the public keys delivered to your browser are indeed the correct ones.
You've got a point about how warnings are pushing incentives towards more verification, but they serve a purpose that aligns with secrecy of communication.
Wasn't WOT (Web Of Trust) supposed to fix this? Basically, I get other people to sign my public key asserting that it's actually me and not someone else, and if enough people do that it's considered "trusted", but in a decentralized fashion that's not tied to "authorities"?
Perhaps you should understand a system before slandering it? As others have said, encryption without authentication is useless.
Running a CA has an associated cost, including maintenance, security, etc. That's what you pay for when you acquire a certificate. Whether the current market prices' markup is too high would be a different question, but paying for a certificate is definitely not spending $200 to look cool.
CAs are the best known way (at the moment) to authenticate through insecure channels (before anyone brings up pinned certs or WoT, read this comment of mine: https://news.ycombinator.com/item?id=8616766)
EDIT: You can downvote all you want but I'm still right. Excuse my tone, but slandering a system without an intimate understanding of the "how"s and the "why"s (i.e. spreading FUD) hurts everyone in the long run.
That's the third comment of yours in which I've seen you taunt downvoters via edits in this thread alone. That's why I'm downvoting you. Knock it off, please.
This is awesome! It looks like what CACert.org set out to be, except this time, instead of developing the CA first and then seeking certification (which has been a problem due to the insanely expensive audit process), the EFF got the vendors on board first and then started doing the nuts and bolts.
This is huge if it takes off. The CA PKI will no longer be a scam!
I'd trust the EFF/Mozilla over a random for profit "security corporation" like VeriSign any day of the week and twice on Sunday to be good stewards of the infrastructure.
I don't see how this actually keeps the CA PKI from being a scam. While I personally trust the EFF & Mozilla right now, as long as I can't meaningfully revoke that trust, it's not really trust and the system is still broken.
You can revoke your trust in any CA at any time, you don't even need to see any errors! Just click the little padlock each time you visit a secure website and see if the CA is in your good books. If it's not, pretend the padlock isn't there!
OK, that's a little awkward. A browser extension could automate this. But in practice, nobody wants to do this, because hardly anyone has opinions on particular CAs. It's a sort of meta-opinion - some people feel strongly they should be able to feel strongly about CAs, but hardly anyone actually does. So nobody uses such browser extensions.
The EFF has a bad track record in this area. The last time they tried something to identify web sites, it was TRUSTe, a nonprofit set up by the EFF and headed by EFF's director. Then TRUSTe was spun off as a for-profit private company, reduced their standards, stopped publishing enforcement actions, and became a scam operation. The Federal Trade Commission just fined them: "TRUSTe Settles FTC Charges it Deceived Consumers Through Its Privacy Seal Program Company Failed to Conduct Annual Recertifications, Facilitated Misrepresentation as Non-Profit" (http://www.ftc.gov/news-events/press-releases/2014/11/truste...) So an EFF-based scheme for a new trusted nonprofit has to be viewed sceptically.
This new SSL scheme is mostly security theater. There's no particular reason to encrypt traffic to most web pages. Anyone with access to the connection can tell what site you're talking to. If it's public static content, what is SSL protecting? Unless there's a login mechanism and non-public pages, SSL isn't protecting much.
The downside of SSL everywhere is weak SSL everywhere. Cloudflare sells security theater encryption now. All their offerings involve Cloudflare acting as a man-in-the-middle, with everything decrypted at Cloudflare. (Cloudflare's CEO is fighting interception demands in court and in the press, which indicates they get such requests. Cloudflare is honest about what they're doing; the certificates they use say "Cloudflare, Inc.", so they identify themselves as a man-in-the-middle. They're not bad guys.)
If you try to encrypt everything, the high-volume cacheable stuff that doesn't need security but does need a big content delivery network (think Flickr) has to be encrypted. So the content-delivery network needs to impersonate the end site and becomes a point of attack. There are known attacks on CDNs; anybody using multi-domain SSL certs with unrelated domains (36,000 Cloudflare sites alone) is vulnerable if any site on the cert can be broken into. If the site's logins go through the same mechanism, security is weaker than if only the important pages were encrypted.
You're better off having a small secure site like "secure.example.com" for checkout and payment, preferably with an Extended Validation SSL certificate, a unique IP address, and a dedicated server. There's no reason to encrypt your public product catalog pages. Leave them on "example.com" unencrypted.
Regarding your first paragraph, I agree: all CAs need continuing scrutiny. Certificate Transparency, for example.
Regarding the rest of your post, however, I'm calling bullshit. You give very bad advice. Deploy TLS on every website. Deploy HTTP Strict-Transport-Security wherever you can.
The sites people visit are confidential, and yes, are not protected enough at the moment. (That will eventually improve, piece by piece.) That's absolutely no excuse at all for you not protecting data about the pages they're on or the specific things they're looking at, even if your site is static, or not protecting the integrity of your site. You have no excuse for that. Go do it.
Your other big problem is thinking that anything on your domain "doesn't need security"! Yes it does - unless you actually want your website co-opted for malware planting by nation-state adversaries with access to Hacking Team(s) (~cough~), or a middleman injecting malicious JavaScript into the insecure parts of your site, or serving someone else's "secure" login page over http: with a lock favicon. (I have seen this in the wild, yes.) If you've deployed a site on that bad advice, it could be exploited like that today: go back and encrypt it properly before someone hacks your customers. This is why HSTS exists. Use it.
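For anyone who hasn't deployed it: HSTS is a one-line response header. A typical nginx sketch (server names and cert paths omitted; max-age is the common one-year value):

```
# Redirect plain HTTP, and on the TLS side tell browsers to refuse
# plain HTTP for this host for the next year (RFC 6797).
server {
    listen 80;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    # ssl_certificate / ssl_certificate_key directives omitted
}
```

Once a browser has seen the header, SSL stripping stops working against returning visitors, because the browser won't even attempt http.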
Regarding your CDN point, kindly cite - or demonstrate - your working "known attack" against Cloudflare's deployment?
Basic concept:
1) find target site A with shared SSL cert. Cloudflare gets shared SSL certs with 50+ unrelated domains.
2) find vulnerable server B in a domain on same cert. (Probably Wordpress.)
3) attack server B, inserting fake copy of important pages on site A with attack on client or password/credit card interception.
4) use DNS poisoning attack to redirect A to B.
All it takes is one vulnerable site out of the 50+ on the same cert.
The whole shared-cert thing is a workaround for Windows XP. Cloudflare does it because they're still trying to support IE6 on Windows XP, which doesn't speak Server Name Indication (SNI), and they don't have enough IPv4 addresses to have one per customer.
>If it's public static content, what is SSL protecting?
Plenty.
Off the top of my head:
It protects people/companies from having their reputations ruined by a MITM attack that replaces content on their site with something offensive.
It protects sensitive/important content on sites from being tampered with by an attacker. For example, if I am hosting a binary for download I can make a signature available for that binary on my site. In order for the signature to serve its purpose the user needs to be sure it hasn't been modified en route.
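That reasoning is worth spelling out: the check is only as strong as the channel the published digest travels over. A minimal sketch:

```python
import hashlib

def digest_matches(binary_bytes, published_hex):
    # Compare a downloaded binary against the digest published on the
    # site. If the page carrying published_hex travels over plain HTTP,
    # a middleman can rewrite the digest and the binary together, and
    # this check verifies nothing -- hence the page itself needs TLS.
    return hashlib.sha256(binary_bytes).hexdigest() == published_hex
```

(The same logic applies to detached GPG signatures: if the key fingerprint you verify against came over an unauthenticated channel, the signature proves nothing.)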
TLS gives you authenticity and secrecy; those seem like useful defaults, and in 2014, I think the question should be "how?" rather than "why?" It seems this project aims to address some of the process headaches and cost barriers that currently deter some from using TLS by default.
I do think behind-the-CDN interception, in-front-of-the-CDN compromises, and weak CDN crypto are all serious concerns. I won't name any names here, but the employment histories of major CDNs' security team members definitely deserve closer scrutiny by civil society groups and reporters, especially those interested in fighting mass surveillance.
But overall, I think it's important to respect the privacy and security of users first, and work toward solving the engineering problems that need to be solved in order to affirm that commitment to users, as these folks have tried to do.
> If it's public static content, what is SSL protecting?
In this case, SSL protects against MITM attacks. If a customer goes to the unencrypted "example.com" site and gets a bunch of ads for porn, it will give the customer a negative impression of the company. All it would take is a few pitchfork-wielding high-profile twitter accounts to cause a PR nightmare. Even if the cause is a hacked coffee shop wireless access point, it may be hard to restore public opinion.
That scenario is a long-shot, but in my opinion, the potential negative consequences outweigh the time and energy required to set up SSL (especially since a basic SSL certificate is free).
> especially since a basic SSL certificate is free
From where? StartSSL only gives out free certs to individuals. For my company, they've actually required me to get organizational validation in the past, which wasn't cheap ($200, IIRC—$100 for the organizational validation, plus $100 for stage 2 personal validation, which also required me to upload images of my driver's license and passport).
> There's no reason to encrypt your public product catalog pages. Leave them on "example.com" unencrypted.
Of course this is true in theory, but in practice, both clients and customers get 'warm fuzzies' from seeing that green lock in the URL window.
It lets them 'know' that the company they are dealing with is at least somewhat reputable. Whether this is true or not doesn't matter; it is the perception many people have, and it does affect sales numbers in the real world.
i think the realpolitik/"not really caring about users" rationale is more "when someone MITMs the person browsing your company's catalog, it still makes your company look bad". and in my opinion, it should.
Looking at the spec [0] I'm concerned about the section on 'Recovery Tokens'.
"A recovery token is a fallback authentication mechanism. In the event that a client loses all other state, including authorized key pairs and key pairs bound to certificates, the client can use the recovery token to prove that it was previously authorized for the identifier in question.
This mechanism is necessary because once an ACME server has issued an Authorization Key for a given identifier, that identifier enters a higher-security state, at least with respect the ACME server. That state exists to protect against attacks such as DNS hijacking and router compromise which tend to inherently defeat all forms of Domain Validation. So once a domain has begun using ACME, new DV-only authorization will not be performed without proof of continuity via possession of an Authorized Private Key or potentially a Subject Private Key for that domain."
Does that mean, if for instance, someone used an ACME server to issue a certificate for that domain in the past, but then the domain registration expired, and someone else legitimately bought the domain later, they would be unable to use that ACME server for issuing an SSL certificate?
This is a question about the policy layer of the CA using the ACME protocol.
The previous issuing CA should have revoked the cert they issued when the domain was transferred. But a CA speaking the ACME protocol might choose to look at whois and DNS for additional information to decide whether it issues different challenges in response to a certification request.
It's possible that this question shouldn't be decided one way or another in the specification, since it will ultimately be more a matter of CA policy about how the CA wants to handle automated issuance and risks.
I suppose they could check WHOIS at a regular interval to check whether a domain secured by one of their certs has expired, and update the state of the ACME server accordingly?
Free CA? This is cool. Why this wasn't done a long time ago is beyond me. (Also please support wildcard certs)
An interesting thing happened at a meet-up at Square last year. Someone from google's security team came out and demonstrated what google does to notify a user that a page has been compromised or is a known malicious attack site.
During the presentation she was chatting about how people don't really pay attention to the certificate problems a site has, and how they were trying to change that through alerts/notifications.
After which someone asked that if google cared so much about security why didn't they just become a CA and sign certs for everyone. She didn't answer the question, so I'm not sure if that means they don't want to, or they are planning to.
What privacy concerns should we have if someone like goog were to sign the certs? What happens if a CA is compromised?
It wasn't done a long time ago because running a CA costs money (which is why they charge for certificates), so whoever signs up to run one is signing up for a money sink with no prospect of direct ROI, potentially for a loooooong time. This new CA is to be run by a non-profit that uses corporate sponsorship rather than being supported by the market; whether that's actually a better model in the long run is I suppose an open question. But lots of other bits of internet infrastructure are funded this way, so perhaps it's no big deal.
There aren't a whole lot of privacy concerns with CAs as long as you use OCSP stapling, so users' browsers aren't hitting up the CA each time they visit a website (Chrome never does this, but other browsers can).
Re: CA compromise. One reason running a CA costs money is that the root store policies imposed by the CA/Browser Forum require (I think!) the use of a hardware security module which holds the signing keys. This means a compromised CA could issue a bunch of certs for as long as the compromise is active, but in theory it should be hard or impossible to steal the key. Once the hackers are booted out of the CA's network, it goes back to being secure. Of course quite some damage can be done during this time, and that's what things like Certificate Transparency are meant to mitigate - they let everyone see what CAs are doing.
> imposed by the CA/Browser Forum require (I think!)
That's something imposed by the audit criteria (WebTrust/ETSI). What you detailed is also why roots are kept disconnected from the internet - if an intermediate is compromised, it can be blacklisted, as opposed to the entire root.
> Why this wasn't done a long time ago is beyond me.
While probably not officially scriptable, free certificates have been available since a long time ago: https://www.startssl.com/?app=1
Also, no free wildcard certs. Which I really want.
> What happens if a CA is compromised?
Looking at past compromises, if they have been very irresponsible they are delisted from the browsers' list of trusted roots (see diginotar). If they have not been extremely irresponsible, then they seem to be able to continue to function (see Comodo).
You raise a good point though, SSL/TLS Certs are trying to deal with two separate problems:
1. Over the wire encryption (which this handles)
2. Site identification to stop phishing (a bad system, but the best we've got).
Currently, for even the cheapest certs (domain+email validated) - the CAs will reject SSL cert requests for anything that might be a phishing target. Detecting "wellsfargo.com" is pretty easy, where it gets tricky is things like "wellsforgo.com", "wellsfàrgo.com" etc. Which if I'm looking at this right will just sail through with LetsEncrypt.
I suspect we're actually going to end up with two tiers of SSL certs, as the browser makers have started to really de-emphasize domain-validated certs [1] like this vs the Extended Validation (really expensive) certs, to the point where in most cases having a domain cert does not show green (and maybe doesn't even show a lock) at all.
As a side note, Google had announced that they were going to start using SSL as a ranking signal [2] (sites with SSL would get a slight bump in rankings), from this perspective the "high" cost of a cert was actually a feature as it made life much more expensive on blackhat SEOs who routinely are setting up hundreds of sites.
If you can make microsoft.com serve up the correct challenge response, you'll be able to get a cert for them issued by this project. This isn't a pure rubber-stamping service.
I think the issue of whether or not there should be a wide new industry borne on the back of the CA architecture is all a bit of a red herring, anyway. This is only security at the web browser: do we trust our OS vendors to be CAs, too? If so, then I think we may see a cascade/avalanche of new CAs being constructed around the notion of the distribution. I know for sure, even if I have all the S's in the HTTP in order, my machine itself is still a real weak point. When, out of the box, the OS is capable of building its own certified binaries and adding/denying capabilities of its build products, inherently, then we'll have an interesting security environment. This browser-centric focus on encryption is but the beachhead for broader issues to come, methinks; do you really trust your OS vendor? Really?
If each domain name can get a non-wildcard cert for free, quickly, why do you need wildcard certs? For multi-subdomain hosting on one server? Just wondering.
For my previous use cases, it's ideal for dynamically created subdomains of a web application. If I know ahead of time, it's easy to grab a cert for any subdomain. However if a user is creating subdomains for a custom site or something similar, it's much nicer/easier to have the wildcard cert.
Lots of services create dynamic subdomains in the form of "username.domain.com". To offer SSL on those domains without a wildcard certificate, you'd need to obtain a new certificate and a new IPv4 address every time a user signs up. You also need to update configuration and restart the web server process.
Google is a CA, and they sign their own certs as "Google Internet Authority G2" under SHA fingerprint BB DC E1 3E 9D 53 7A 52 29 91 5C B1 23 C7 AA B0 A8 55 E7 98.
They're subordinate under another CA (GlobalSign), and presumably contractually obligated to only sign their own certs. GlobalSign offers the following service to anyone willing to pay the sizable fee, undergo a sizable audit, comply by the CA/Browser forum rules, and only issue certs to themselves:
I couldn't be happier about the news, the EFF and Mozilla always had a special place in my heart. However, the fact that we have to wait for our free certificates until the accompanying command line tool is ready for prime time seems unnecessary. Another thing I'm interested in is whether they provide advanced features like wildcard certificates. This is usually the kind of thing CA's charge somewhat significant amounts of money for.
The thing that's causing the delay is not the client software development, it's the need to create the CA infrastructure and then perform a WebTrust audit. If we were ready on the CA side to begin issuing certificates today, we would be issuing them today.
I think I may have misunderstood all of you. Is the audit process itself really that time consuming? I can imagine the amounts of bureaucracy involved, but I can't image this takes much longer than, say, a month or so. Most of the time is probably spent waiting for someone or something, right? I mean we're talking about very capable people here who have done this kind of thing before.
I doubt the actual CA has been set up either. They're setting up their own root while cross-signing from IdenTrust; that's not a one-day activity. Auditors have to be present, software has to be designed and tested, etc.
It's true that this shouldn't be done in a day, but it's trivial compared to building a command line tool that automatically configures HTTP servers and designing an open protocol that issues and renews certificates. This is especially true if one of your partners is a CA.
---
Let me be clear here: I'm not complaining that I don't get my free cake now. I do think however that most people at the EFF and Mozilla would agree that we needed something like this a couple of years ago. In that context I think it's at least noteworthy that they decided to wait until other parts of the system are ready.
There's a scenario (simplified for illustration, but entirely possible) that's normally not a huge risk because there are many CAs, and they are private, for-profit companies that have an economic incentive to protect you and your certificate's ability to assure end users that a conversation's privacy won't be compromised.
1) browser requests site via SSL
2) MITM says, "let's chat - here's my cert"
3) browser asks, "is this cert legit for this domain?"
4) MITM says, "yes, CA gave us this, because of FISA, to give to you as proof"
5) browser says, "ok, let's chat"
I'm not trying to spread FUD, but if you're NSA and you've been asking CAs for their master keys for years, doesn't a single CA sound great (free and easy == market consolidation), and doesn't EFF seem like the perfect vector for a Trojan horse like this, given its popularity and trust among hacker types gained in recent years?
We will look for ways to mitigate the risk of misissuing for any reason, including because someone tries to coerce us to misissue. One approach to this that's interesting is Certificate Transparency.
There's also HPKP, TACK, and DANE, plus the prospect of having more distributed cert scans producing databases of all the publicly visible certs that people are encountering on the web.
I'm not sure I follow that line of reasoning. Each CA is independently and completely able to issue certificates (not counting EV, but let's leave that out). There are hundreds of CAs. Depending on your trust store, some of them are literally owned by the US Department of Defense. Others are owned by the Chinese government.
How does having _fewer_ CAs make anything easier? Why is the EFF a better route than any of the various other companies that have gotten themselves in the CA program? And given that all the CAs are equivalently trusted at a technical level, why does the human trust afforded the EFF affect whether it's a better target?
This is not an attempt to reduce the CA system to a single CA. The intent here is to provide a simple and free way for anyone to get basic DV certs. If we can also contribute to CA best practices, and help improve the CA system in general, we'd like to do that too.
Let's Encrypt is only planning to issue DV certificates, since that is the only type that can be issued in a fully automated way. Many organizations will want something other than DV, and they'll have to get such certs from other CAs.
Also, our software and protocols are open so that other CAs can make use of them.
Let's Encrypt is going to publish records of everything it signs, either with Certificate Transparency or some other mechanism.
Browsers will be able to check any cert signed by the Let's Encrypt CA against the published list. If there's a discrepancy, that will be immediately detectable.
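A toy model may help show why a published list makes misissuance detectable. This is only the concept, with illustrative names and dummy data; real Certificate Transparency uses signed Merkle-tree logs and inclusion proofs, not a flat set.

```python
import hashlib

# Concept sketch: the CA appends the fingerprint of every cert it signs
# to a public log; a client that receives a cert chaining to that CA can
# check the fingerprint against the log. A cert issued out of band
# (coerced or hacked issuance) won't appear there.

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

published_log = set()

def ca_sign(cert_der: bytes):
    published_log.add(fingerprint(cert_der))  # logged at issuance time

def client_check(cert_der: bytes) -> bool:
    return fingerprint(cert_der) in published_log

legit = b"cert issued through the normal pipeline"
ca_sign(legit)
rogue = b"cert mis-issued out of band, never logged"
assert client_check(legit)
assert not client_check(rogue)   # the discrepancy is detectable
```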
Out of curiosity, what if MITM says, "include me in this list for IP <user IP>"? If the check is not done in a way that solves the byzantine generals problem, I don't see how this feature provides any more protection, other than one more hoop to jump through.
The NSA (or any other agency) only has to coerce any single CA to cooperate. As long as it's in the standard set shipped with browsers, its certificates are accepted.
And pretty much every major government directly or indirectly controls one or multiple CAs that are in the standard set.
The "How It Works" page, https://letsencrypt.org/howitworks/, has me a bit worried. Anytime I see a __magic__ solution that has you running a single command to solve all your problems I immediately become suspicious at how much thought went into the actual issue.
If I'm running a single web app on a single Ubuntu server using Apache then I'm set! If I'm running multiple web apps across multiple servers using a load balancer, nginx on FreeBSD then...
All the same I'm really looking forward to this coming out, it can be nothing but good that all of these companies are backing this new solution and I'm sure it'll expand and handle these issues as long as a good team is behind it.
What you're seeing today is demos, not the software in its final form. You're also seeing it demo'd with a focus on the most simple usage. There are, and will be, advanced options.
We'll be doing quite a bit of work based on user feedback between now and when we go live. We're well aware that we need to cater to a variety of types of users.
I run Apache httpd, and there's no way I'd let a wizard anywhere near my configuration files or private keys, much less run it on a production server.
I think it's about time for a free CA that is recognized by all clients, but you still need to establish a trust chain to exchange a CSR for a signed certificate. This service needs to be server agnostic. The barrier to adoption isn't configuration, and HTTPS isn't the only thing that uses certificates.
There are lots of different barriers to adoption. With this project we are attacking several of them at the outset, including the cost of obtaining a certificate, and the inconvenience or difficulty of obtaining and installing it for users who don't do that every day.
Because of the open protocol we also aspire to support users with more complex configurations and requirements, who are absolutely welcome and encouraged to write their own implementations of the protocol and integrate with their own existing certificate management and configuration methods. If you think of other barriers to adoption that we can help with, please let us know and we'll try to address them; if you just want our certs for free, please get them and enjoy!
Yes, this will only hit the common small-site case. Hopefully if you're running "multiple web apps across multiple servers using a load balancer" you will have the skill to configure HTTPS properly for that situation, which will probably involve custom configuration on the load balancer. It's not a criticism of something trying to solve the common case, where the common solution up until today is pretty much "just forget about it", that it doesn't work at "cloud scale".
It doesn't seem as magical when you drill down. And if you roll your own nginx or whatever, it'll be less transparent still. But yeah, someone like Ubuntu or Red Hat could enable this on their product that simply.
Domain validation is done through a challenge (issued by a CA) to sign arbitrary data and put it at a URL (covered by the domain) that the CA can then query. This seems pretty solid. Better than email.
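The flow described above can be sketched in a few lines. Everything here is illustrative rather than the actual ACME wire format (the draft spec defines its own encodings): the CA issues a random token, the applicant derives a response bound to its account key, publishes it at a URL under the domain, and the CA fetches and compares.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> str:
    # CA side: a fresh random token per validation attempt.
    return secrets.token_hex(16)

def challenge_response(token: str, account_key: bytes) -> str:
    # Applicant side: bind the response to the account key so that a
    # passive observer who sees the token cannot replay it under a
    # different key. (HMAC here is a stand-in for the spec's signature.)
    return hmac.new(account_key, token.encode(), hashlib.sha256).hexdigest()

# -- simulated round trip, no network --
account_key = b"applicant key material (illustrative)"
token = issue_challenge()
published = challenge_response(token, account_key)  # served at the URL
fetched = published                                 # CA's HTTP GET
assert fetched == challenge_response(token, account_key)
```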
Generate key, Generate CSR, Send CSR, Receive Certs from CA, Verify ownership, Install certs
Presumably their command line client creates the key, the CSR, sends the CSR, then gets back the certs (at least I'd hope so). I'd be happy to use a vetted command line utility which did that, or even just parts of that process, if I were sure the private key were not transmitted. It's just automating stuff which with current CAs needs to be done manually.
The tool will gather the domains, use the CA API to validate ownership, obtain the certs (which cannot be unilaterally created since they are based on a public/private key pair) and manage their expiry.
That wouldn't be safe, because then they would have access to your private key and could impersonate you. Having you (indirectly via their script) generate the key and submit the public key for signing means your private key never leaves the premises.
It's primarily because of the interactive challenge to prove that you control the domains you're requesting the cert for.
If you want, the client can just give you the cert at the end instead of installing it. In the common case for a user who's not currently comfortable with the process, the client is automating several things -- generating a private key and CSR, proving control of the domain, and installing the key and cert in the server.
What kind of abuse were you thinking about? If the domain is hijacked, you simply repossess the domain and request a new certificate, and the old one is revoked.
As in, revoking a cert for a known C&C box, or a confirmed spammer, confirmed box serving an exploitkit, confirmed phishing domain (such as my-apple-ikloud-verify.foo)
Basically, my assumption is they won't want to be providing certs to known bad actors. So I'm curious who is going to own the abuse handling for the CA.
This seems like a really great step toward an HTTPS web. It will be an immediately deployable solution that can hopefully make TLS encryption normal and expected.
However, it doesn't do anything about the very serious problems with the CA system, which is fundamentally unsound because it requires trust and end users do not meaningfully have the authority to revoke that trust. And there's a bigger problem: if EFF's CA becomes the standard CA, there is now another single point of failure for a huge portion of the web. While I personally have a strong faith in the EFF, in the long term I shouldn't have to.
Agreed. For all the hoopla, this is basically just like any other CA (but free). Until we have a truly distributed (namecoin-esque) and accepted CA structure, signed certificates may as well be pipes directly to the NSA.
That said, not having to pay some jerk for sending me an email and having me enter a code is really nice. The current CA system is a pitiful excuse for identity verification, and not having to pay for it will be nice.
Here's my current issue with moving to TLS: library support.
I do a lot of custom stuff and want to run my own server. I can set up and run the server in maybe 50-100 lines of code, and it works great.
I know, I should conform and use Apache/nginx/OpenSSL like everyone else. Because they're so much more secure, right? By using professional code like the aforementioned, you won't get exposed to exploits like Heartbleed, Shellshock, etc.
But me, being the stubborn one I am, I want to just code up a site. I can open up a socket, parse a few text lines, and voila. Web server. Now I want to add TLS and what are my options?
OpenSSL, crazy API, issues like Heartbleed.
libtls from LibreSSL, amazing API, not packaged for anything but OpenBSD yet. Little to no real world testing.
Mozilla NSS or GnuTLS, awful APIs, everyone seems to recommend against them.
Obscure software I've never heard of: PolarSSL, MatrixSSL. May be good, but I'm uneasy with it since I don't know anything about them. And I have to hope they play nicely with all my environments (Clang on OS X, Visual C++ on Windows, GCC on Linux and BSD) and package managers.
Write my own. Hahah. Hahahahahahahahah. Yeah. All I have to do is implement AES, Camellia, DES, RC4, RC5, Triple DES, XTEA, Blowfish, MD5, MD2, MD4, SHA-1, SHA-2, RSA, Diffie-Hellman key exchange, Elliptic curve cryptography (ECC), Elliptic curve Diffie–Hellman (ECDH), Elliptic Curve DSA (ECDSA); and all with absolutely no errors (and this is critical!), and I'm good to go!
I'm not saying encryption should be a breeze, but come on. I want this in <socket.h> and available anywhere. I want to be able to ask for socket(AF_INET, SOCK_STREAMTLS, 0), call setsockcert(certdata, certsize) and be ready to go.
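For what it's worth, the wished-for `SOCK_STREAMTLS`/`setsockcert()` calls don't exist in the BSD socket API, but some language runtimes get close to the desired shape. A sketch using Python's stdlib `ssl` module (one context object, then wrap any TCP socket) shows roughly the ergonomics being asked for:

```python
import socket
import ssl

# Create a context once; it carries sane defaults and verifies peers
# against the system trust store.
ctx = ssl.create_default_context()

def tls_connect(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TCP connection and upgrade it to TLS in one step."""
    raw = socket.create_connection((host, port))
    # server_hostname enables SNI and hostname verification.
    return ctx.wrap_socket(raw, server_hostname=host)

# The server side is the mirror image: load your cert and key into a
# server context, then wrap accepted sockets.
# server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# server_ctx.load_cert_chain("cert.pem", "key.pem")
```

That's still a library rather than `<socket.h>`, which is the commenter's complaint, but it shows the API surface can be small.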
Everything we do in computer science is always about raising the bar in terms of complexity. Writing software requires larger and larger teams, and increasingly there's the attitude that "you can't possibly do that yourself, so don't even try." It's in writing operating systems, writing device drivers, writing web browsers, writing crypto software, etc.
I didn't get into programming to glue other people's code together. I want to learn how things work and write them myself. For once in this world, I'd love it if we could work on reducing complexity instead of adding to it.
Wow, of all the arguments I could think of against the current CA/TLS/HTTPS situation, a hobbyist deciding to write their own web server would not be one of them... Yes, you should just conform and stop doing this. Or at the very least you could let another process do TLS termination and just handle HTTP, if you really want to create your own off-by-one remote code execution errors instead of using the ones supplied by apache et al.
> a hobbyist deciding to write their own web server would not be one of them
nginx started out as a hobby project by Igor Sysoev. Maybe he should have just used Apache too?
> Or at the very least you could let another process do TLS termination and just handle HTTP
A well-designed HTTPS->HTTP proxy package could work. Install proxy, and requests to it on 443 fetch localhost:80 (which you could firewall off externally if you wanted) and feed it back as HTTPS. Definitely not optimal, especially if it ends up eating a lot of RAM or limiting active connections, but it would be a quick-and-dirty method that would work for smaller sites.
But it won't handle other uses of TLS, such as if you wanted to use smtp.gmail.com, which requires STARTTLS. Or maybe you want to write an application that uses a new custom protocol, and want to encrypt that.
If you put this stuff into libc, and get it ISO standardized and simplified, and have it present out of the box with your compilers on each OS, then you'll open the door for developers to more easily take advantage of TLS encryption everywhere.
> Let's Encrypt will be overseen by the Internet Security Research Group (ISRG), a California public benefit corporation. ISRG will work with Mozilla, Cisco Systems Inc., Akamai, EFF, and others to build the much-needed infrastructure for the project and the 2015 launch
What's Cisco's role in this? I'm quite worried about that. It has been reported multiple times that Cisco's routers have NSA backdoors in them, from multiple angles (from TAO intercepting the routers to law enforcement having access to "legal intercept" in them).
So I hope they are not securing their certificates with Cisco's routers...
Lawful Intercept isn't a blanket government back door per se. It's a featureset that allows the operator to configure what is effectively a remote packet capture endpoint. That endpoint is disabled by default and requires operator configuration to be enabled.
It just happens that every ISP/telco in the US needs this capability to comply with CALEA so it's manufacturers responding to market forces. Juniper supports it, A Latvian router manufacturer supports it (http://wiki.mikrotik.com/wiki/CALEA), there's even open source code to do it (https://code.google.com/p/opencalea/) if you're building your own routers.
There's a place to focus your ire over wiretapping. The manufacturers aren't it.
Yep. Read the product docs on some of their big network routers. You'd be surprised what's in there in terms of port mirroring and what they suggest carriers should use it for.
We have a DNS system in place which should be enough to establish trust between a browser and an SSL public key. E.g. a site could store a self-signed certificate fingerprint in a DNS record, and the browser should be fine with that. If the DNS system is spoofed, the user is in a bad place anyway, so DNS must be secured in any case.
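The scheme being proposed is essentially what DANE/TLSA records do. A toy sketch (the record naming and the in-memory "zone" below are simplified stand-ins, not real DNS code, and the whole idea only holds if the zone is DNSSEC-signed):

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

dns_zone = {}  # stand-in for a (DNSSEC-signed!) zone

def publish(domain: str, cert_der: bytes):
    # Site operator publishes the fingerprint of their (possibly
    # self-signed) certificate under a well-known record name.
    dns_zone["_tls." + domain] = cert_fingerprint(cert_der)

def browser_accepts(domain: str, presented_cert_der: bytes) -> bool:
    # Browser compares the cert from the TLS handshake against the
    # fingerprint found in DNS; no CA involved.
    expected = dns_zone.get("_tls." + domain)
    return expected is not None and expected == cert_fingerprint(presented_cert_der)

site_cert = b"self-signed cert for example.com"
publish("example.com", site_cert)
assert browser_accepts("example.com", site_cert)
assert not browser_accepts("example.com", b"attacker-substituted cert")
```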
1. I really hope this is hosted in a non-FVEY territory.
2. Why can't we set a date (say, 5 years?) when all browsers default to https, or some other encrypted protocol, and force you to type "http://" to access old, unencrypted servers?
from the ACME spec, it looks like proof of ownership is provided via[0]:
>Put a CA-provided challenge at a specific place on the web server
or
> Put a CA-provided challenge at a DNS location corresponding to the target domain.
Since the server will presumably be plaintext at that point and DNS is UDP, couldn't an attacker like NSA just mitm the proof-of-site-ownership functionality of lets-encrypt to capture ownership at TOFU and then silently re-use it, e.g. via Akamai's infrastructure?
(1) You can do the attack you describe today with existing CAs that are issuing DV certs because posting a file on the web server is an existing DV validation method that's in routine use.
(2) There is another validation method we've developed called dvsni which is stronger in some respects (but yes, it still trusts DNS).
(3) We're expecting to do multipath testing of the proof of site ownership to make MITM attacks harder. (But as with much existing DV in general, someone who can completely compromise DNS can cause misissuance.)
(4) If the community finds solutions that make any step of this process stronger, Let's Encrypt will presumably adopt them.
Let's Encrypt can run a web spider - crawl the web to build a database of actively used domain names.
Periodically poll DNS for the list from that database to obtain the NS records for pretty much all of the web, and also A records for all the actively used hosts you find in the crawl. Keep this cache as a trace of how DNS records change.
Now, do the DNS polling from several different geographic locations. Now you've got a history of DNS from different viewpoints.
When you get a request for a certificate for, say, "microsoft.com", look up the domain name in the way described on the Lets Encrypt description. But also check that this IP address appears in the history, either from multiple locations for a few days, or from one location for a few months.
If this test fails, check if the historic IP addresses for this domain from the polled cache are already running TLS, signed by a regular CA. If so, reject the application.
Otherwise continue with validation in the way described on the Lets Encrypt web page.
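The history check in the steps above can be sketched as a simple heuristic. The thresholds and data layout here are made up for illustration; a real implementation would have to pick them empirically and guard against slow-moving attackers.

```python
# Accept an IP for a domain only if the polled DNS history has seen it
# from several vantage points, or from one vantage point for a long
# time. history maps (domain, ip) -> {vantage_point: days_observed}.

def ip_looks_established(history, domain, ip,
                         min_vantage_points=3, min_days_single=90):
    sightings = history.get((domain, ip), {})
    if len(sightings) >= min_vantage_points:
        return True
    return any(days >= min_days_single for days in sightings.values())

history = {
    ("microsoft.com", "1.2.3.4"): {"us-east": 400, "eu-west": 400, "ap-se": 400},
    ("microsoft.com", "6.6.6.6"): {"us-east": 1},  # newly appeared: suspicious
}
assert ip_looks_established(history, "microsoft.com", "1.2.3.4")
assert not ip_looks_established(history, "microsoft.com", "6.6.6.6")
```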
agree completely and it's worth noting that i don't have a solution to the issues i mentioned, either.
leveraging other (potentially-insecure) paths to establish trust might help further enhance confidence in authenticity; e.g. verification using something like the broad-based strategy of moxie's perspectives (except via plaintext) or maybe through additional verification of plaintext on the site as fetched via tor or retrieving a cached copy of the site securely from the internet archive or search engines.
dvsni and multipath testing sound quite interesting, and i think defense in depth is the right approach.
having been at akamai's recent edge conference, i didn't hear much from them on this. does anyone have any additional details of their interest in the project?
Domain squatters are already an issue. Imagine if you could register domains for free. I think having to pay $10 for a year is pretty fair. That's one reason I don't mind paying ~$70 for a .io domain: it keeps most squatters away.
You misunderstand. Domains must not be free, and domain cost isn't the problem. Nonprofit registrar != free domains.
The problem is the horrible user experience of registrars like Godaddy. I'd rather give my money to a nonprofit that isn't confusing non-technical website owners into buying products they don't need.
The registrar landscape is better now with Gandi, but still I'd rather pay a fully transparent nonprofit registrar if one existed.
Won't people need to have LetsEncrypt CA certificate installed on their computers to not get that red SSL incorrect certificate thing? Other than that, this is awesome.
Thanks for the clarification! You might want to add that point to your technical how-it-works section[1]. I was wondering how older browsers would accept a new CA's signature.
Also, I really wish AOL would have donated their root certs to y'all[2] so you didn't have to set up a whole new CA.
or e-mail me about them. So far this has only been tested on a handful of configurations and will clearly need to be tested on many more over the next few months.
Please be careful when running it on your live server: if it does manage to get a cert right now, that cert won't be accepted by clients and will produce cert warnings (and if you use the "Secure" option at the end, you'll also be generating redirects from the HTTP site to the cert-warning-generating HTTPS version).
How does a CA that's formed by a conglomerate of U.S. companies (under the jurisdiction of the NSA) make us any safer than we are currently? It doesn't. The chain of trust chains up all the way to a U.S. company, which can be coerced into giving up the certificate and compromising the security of the entire chain. I'm on the side of the EFF trying to encrypt the web, but this is not the solution.
truth be told, it doesn't make anyone safer. it's a big fat placebo, especially once the NSA realizes that this project is entirely under their jurisdiction.
Now, if there was a project in Iceland or Seychelles that was doing something similar, I would be much more apt to participate.
Security theatre for the win(?) Do these people [EFF] not realize that the people they're trying to win over are network nerds? These are people that actually understand this shit and the repercussions of it.
I can't profess to understanding all the details of encryption infrastructure, but I learned very quickly in kindergarten: you can't trust anyone you don't know. It doesn't matter who they are, who they know or what they know. Half the time, you can't even trust "cold hard facts"; the facts are frequently misinterpreted, fabricated or eventually proven to be wrong - once it was a fact that the earth was flat, then we were the centre of the universe, now the universe as we know it is held together by a God particle. Science claims facts that invalidate there being a God... all facts are a matter of our fallible understanding of this scientific instrument we are building. Even people you do trust can be coerced into doing things that compromise your ability to trust them or their motives.
If you want to automate trust, then you're eventually going to have to realize that you can't. All you can do is mitigate the cost of being wrong.
Absolute power corrupts absolutely - the CA (or whoever controls that CA) has absolute power in this scenario. If you have the director's family hostage, everyone else's security just went down the pan.
Chain of trust is like putting all your eggs in one basket. You just don't do it. Web of trust is a marginal step up, but it's more of a pain in the ass and can also be overcome by a group with malicious intent.
Something like Certificate Transparency would counter that - where the browser can only accept certificates that have been made public record. So the owners will at least know when their domain has been attacked.
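The "public record" idea can be sketched as a simple hash chain (a drastic simplification of Certificate Transparency's actual Merkle-tree log, purely illustrative):

```python
import hashlib

def append(log, cert_der):
    # hash of previous head + new entry = new head
    prev = log[-1][1] if log else b"\x00" * 32
    entry_hash = hashlib.sha256(prev + cert_der).digest()
    log.append((cert_der, entry_hash))
    return entry_hash  # publishing this value commits to all prior entries

log = []
h1 = append(log, b"cert for example.com")
h2 = append(log, b"cert for example.org")

# An attacker who swaps out the first entry can't reproduce the
# already-published heads:
assert hashlib.sha256(b"\x00" * 32 + b"evil cert").digest() != h1
```

CT uses a Merkle tree rather than a chain so that inclusion proofs are logarithmic, but the detection property is the same: once a head is published, history can't be silently rewritten.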
A site owner would normally notice something was wrong when their logs stop accumulating traffic. If the site still appears to be up, yet analyzing router logs shows no traffic actually arriving even though everything seems to be functioning normally, that's a huge red flag that something is very wrong. I would expect any operations team worth their salt to figure this out inside of 15 minutes anyway.
Certificate Transparency may help to alert people, it's certainly a step in the right direction, but it doesn't fix the problem in my mind. I honestly don't think the problem can be fixed. All we can do is try and mitigate the risk of our trust being broken.
So this means that GoDaddy, Namecheap, Verisign and other sellers/resellers of SSL certificates will need to lower their prices soon, right? Because in a short time many websites won't need to purchase one since they can get it free.
Also, have they built this system with a completely scalable distributed architecture? For it to be practical it needs to be performant.
Also, does the NSA have access to the core of this system?
Note that the Verizon issue wasn't anything that altered page content outright, but someone who lives in a country with strict monitoring of traffic could easily have the wording of your website changed to match their propaganda if you aren't using HTTPS.
So yes, your content is publicly available free stuff, and probably no one is sending you login credentials or credit cards, but it still matters.
Is there any reason why I would want to use https for this use case?
Yes it can help you stop:
ISPs inserting adverts into your content (this has happened)
Governments censoring your content or rewriting it
Governments putting people in jail for reading your publicly available (in your country) content, which is illegal in theirs
People impersonating your website
But if you don't want to use it, that's cool too. I suspect all websites will be encrypted at some point soon though, the disadvantages are getting less and less important.
>Governments putting people in jail for reading your publicly available (in your country) content, which is illegal in theirs //
How does that work? Surely the gov can still see people accessing the information by monitoring network traffic, and the info itself is still public. HTTPS doesn't encrypt the actual request traffic, does it? And in any case the gov would still see which server the traffic is going to, unless you're using something like Tor [and possibly even then].
Sure! If I trust your site but not my ISP, then https allows me to trust the connection between us. That means that nobody can tamper with the content and inject some malicious JS. Also, the ISP could only tell that I am talking to your server, and not anything beyond that.
People could think they are reading an article from your site but actually they're not or the text was tampered with. With https you ensure people are actually reading what you published.
Honestly, you're probably not going to get a large personal benefit from this. The larger good is that you'll be helping move the Internet toward encrypted-by-default, which is an enormous societal benefit.
Like you, I don't host any private or remotely sensitive information. I'm encrypting my site because I think it's the right thing to do, even though there's little personal return on investment.
While this is nice and I'm happy to see such a product coming, I still don't see a free TLS solution for my smaller projects. Heroku will still charge me $20/mo for TLS even if I have my certificate. Cloudflare will also want to charge me to inspect TLS. I could drop both and get a Linode but then that costs too and is a pain to setup a server myself.
"With a launch scheduled for summer 2015, the Let’s Encrypt CA will automatically issue and manage free certificates for any website that needs them."
'Automatically?'
So we're replacing owning people by snooping on their HTTP traffic with owning people by directing them to fake websites digitally signed by "m1crosoft.com"?
... actually, yes, that is kind of an improvement.
No way I am running something like this on a production machine.
I like the idea but I would rather have the client just output the certificate and key in a dir so I can put the files where I need them and I can configure the changes to my webserver.
Also this does not solve the issue of a CA issuing certificates for your domain and doing MITM.
This is just a pre-announcement to let folks (OSes, hosting providers, other platforms) plan and do integration work. Per our own warnings, we definitely don't want this running on production machines until it launches in 2015.
Our Apache code is a developer preview, we'll be working on Nginx next.
ISRG will be operating a new root CA for this project. Although if you think that your choice of CA makes you more or less secure, you may not have understood how PKIX works -- you can buy a cert from whichever CA you like, but your adversary can always pick the weakest one to try to impersonate you.
> ISRG will be operating a new root CA for this project.
Are you going to be cross-signed by IdenTrust or something? If you're really going to try and create a new root CA from scratch, surely you will be impaled on the spike of low coverage for many years?
"ISRG will be operating a new root CA for this project."
Does that mean every client/browser will need to be updated to include the new CA? Or will it somehow be signed by other (competing) CAs?
I like the idea of this project, and I think it's a great thing for the Internet - I just worry that it will take a long time for it to be usable in practice.
We're putting out a protocol for requesting certs and validating domain control (that our new CA will support -- and we'll be cross-signed to work in all mainstream browsers) and we've already written a client for it that integrates with Apache and can edit your Apache config.
If you're comfortable editing your own Apache configs, then you'll only need to use the client to obtain the cert and not to manage its installation long-term. (The client does need to reconfigure the web server while obtaining the cert as part of the CA validation process.)
The protocol is openly published, so you can write your own client too, or follow the protocol steps manually -- or any web server developer or hosting platform can develop their own alternative client.
There does need to be some client to speak the protocol, but there's no attempt to force you to use it to manage your certs and configs if that's not what you want. The convenience is aimed at people who don't understand how to install a cert, or who think that process is too time-consuming.
ACME is a protocol for securely and automatically issuing certificates. Presumably Apache, Nginx, IIS and any other web server could take advantage of it.
They are creating a new CA for this purpose.
I can't imagine this will replace the manual learn-pay-experiment-error-success process we have at the moment, so experts will still be able to control the process manually.
It doesn't address CA MITM attacks but it does significantly reduce non-CA MITM attacks by removing the two primary objections to deploying SSL: cost and complexity. Bravo EFF - this could be the most significant step in SSL adoption ever.
For those who are wondering why sschueller is saying such things (when I first read his comment my reaction was "how the f* could this be limited to Apache?", which worried me since I mostly use lighttpd), see the How It Works page [1] of Let's Encrypt.
> I would rather have the client just output the certificate and key in a dir
"This code intended for testing, demonstration, and integration engineering with OSes and hosting platforms. Currently the code works with Linux and Apache, though we will be expanding it to other platforms."
I'm sure they'd support "manual setup". Not many sites would opt into running their software agent (yet). I'm expecting the client to come with a lot more benefits than "simple setup", though.
I think that plenty of site admins will be happy to run this software agent—remember, there are many site admins right now who haven't even bothered to set up TLS at all.
I run plenty of tools right now on production boxes that I personally haven't fully audited—we all do. This tool should be simple and widely used enough that it will be trustworthy.
For the cautious, it would be nice if the tool offered a mode that could be run as a normal user, even on a different machine. It'd have to be an interactive process:
1. "Enter domain to be signed."
2. "To validate ownership of the domain, create a TXT record on xxx.example.domain with 'na8sdnajsdnfkasdkey' as the value."
3. "Domain ownership has been validated. Please paste the CSR."
4. "The zone has been signed. Here is your certificate:"
Much less convenient (basically the same as the process with current CAs), but it would allow security-conscious admins to use the CA in a way that is comfortable for them. Since the tool is open source, it should be fairly easy for someone to write their own tool that speaks to the CA while providing this interactive process.
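A hedged sketch of steps 1-3 above in Python (the `_validate` record name and both functions are made up for illustration; this is not the real ACME protocol, and the actual DNS lookup and CSR handling are omitted):

```python
import secrets

def issue_challenge(domain):
    # Steps 1-2: generate a one-time token for the admin to publish
    token = secrets.token_hex(16)
    print(f"Create a TXT record on _validate.{domain} with value: {token}")
    return token

def validate(txt_value_seen_in_dns, issued_token):
    # Step 3: the CA queries the TXT record (lookup omitted here) and
    # compares what it sees against what it issued, in constant time
    return secrets.compare_digest(txt_value_seen_in_dns, issued_token)
```

The point is that "domain ownership" reduces to proving you can publish a CA-chosen secret at a location only the domain's controller can write to, which is why the flow can run from any machine.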
The tool is interesting and to be honest, I'll be comfortable using it (I'm not running anything high-profile or sensitive). However, the real news here is the new CA—the tool is merely a convenience.
I don't want to be a full-fledged sponsor but I'd love to see a donate function on their site. Once this is released, if the CA is trusted by all the major browsers, I am more than willing to shift all the money we spend on certs from all these other "authorities" to something constructive like this.
The real problem here is that http is unencrypted by default. It really should be encrypted so that passive listeners can't see the traffic.
I know that this is no protection against man-in-the-middle attacks, but at least WiFi sniffers and similar would be stopped. State actors would have to actively do something, which might be detected.
It would be a great improvement, because in the current system, most websites are going to stay unencrypted because it takes money and effort to set up a certificate.
The millions of shared hosters won't do it by default.
What we can do:
- Change the http protocol to be encrypted?
- Create an Apache module that automatically does this and needs no setup time (generate private keys automatically?)
Of course there shouldn't be any indicator of this encryption in the address bar of the browser.
This is an awesome idea. But I thought the whole idea of a certificate authority is so that we can trust that the CA has vetted the person/site that they have given the certificate to. If all they do is issue certs for free, all we get is encryption, but no identity verification.
With basic certs, the CA just verifies that the entity controls the website the cert is being issued for. The OP explains how Let's Encrypt will do that. (And if they appeared not to be doing that, no software vendors would include the CA in the trust list).
With an "Extended Validation" cert, the CA additionally verifies that they are who they say they are on the cert (not just that they control the (web)sites the cert was issued for). I'm not sure if Let's Encrypt plans on issuing EV certs, but if they are, they will have to comply with whatever verification standards are standard, in order for vendors not to revoke them from trusted stores. Same as anyone else.
Further upthread, Josh (one of the other people working on the project) explained that Let's Encrypt currently only has plans to issue DV certs, not EV certs.
This is because of the automation aspect. EV cert issuance involves a human being looking at offline identity; DV issuance involves proofs of control that can be checked online by a computer, just as existing DV issuance by existing CAs is based on such checks.
Would you like to spell out more explicitly which effects of U.S. jurisdiction you're most concerned with?
I agree that there are several possible effects of jurisdiction on CAs that people could reasonably be concerned with (whether as would-be certificate requestors or would-be relying parties), but I'm wondering which ones are concerning you most.
It would be nice to have support for ECDSA certificates. I've not found a CA yet who'll provide one of these, despite the fact that many clients already support them. Unfortunately, after a brief look through client.py I can't see any support for this.
Is there any good way of filing an RFE or contributing a patch?
ECDSA certs are much cheaper to decrypt, and there's still some places (especially mobile) where TLS is a noticeable overhead - it'd be great to have a CA that provides them.
Indeed, I think that might be viable: it certainly was for CloudFlare! And good ECC certainly is "modern security techniques and best practices". I would however be OK with RSA-2048 using SHA-256, I guess; it's what many other CAs currently use (and this is partially a lowest-denominator problem).
Comodo definitely has an ECC root available now, a cross-signed ECDSA root with secp384r1, signing a secp256r1 intermediate. (I had heard there were 3 others deployed out there, although off the top of my head I'm not clear about which they are, perhaps they're also cross-signed?)
Why is ECC so poorly deployed in TLS? I've heard indications Certicom had formerly aggressively asserted patents, hence the lack of ECC-supporting CAs; but I don't know which ones. I highly doubt they're still extant, however: many have since expired.
Do be aware, however, that ECDSA can present a huge hazard if the k value needed for every signature is even partially predictable. Officially it should be strongly random: even the first two bits being consistently predictable is cumulatively disastrous, and using the exact same k to sign two different things is absolutely catastrophic (it's how the Sony PS3 root keys were calculated!). DSA had this issue too.

If you have a strong PRF for your RNG, you should be fine (e.g. LibreSSL uses ChaCha20). If you want some insurance just in case, you can use a more auditable, less fragile deterministic process with a strong PRF so the same input always gets the same, unpredictable k (see RFC 6979), or a combination of the two approaches (e.g. BoringSSL). If you haven't audited your RNG and confirmed it's strong, check it before you deploy ECDSA: if your system is headless, its entropy pool is running on empty and your idea of a mixing function is RC4, it might not be such a hot idea.
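The "same k twice is catastrophic" claim is easy to demonstrate with nothing but modular arithmetic (toy numbers, no actual curve; only the group order n is the real P-256 value):

```python
# ECDSA over group order n: s = k^-1 * (h + r*d) mod n.
# Reusing k across two signatures lets anyone recover the private key d.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
d = 0x1234567890ABCDEF      # "private key" (made up for the demo)
k = 0x0F0E0D0C0B0A0908      # the reused nonce -- the fatal mistake
r = 0xCAFEBABE              # in real ECDSA, r = x-coord of k*G; fixed since k is fixed
h1, h2 = 111, 222           # two different message hashes

inv = lambda x: pow(x, -1, n)   # modular inverse (Python 3.8+)

s1 = (inv(k) * (h1 + r * d)) % n
s2 = (inv(k) * (h2 + r * d)) % n

# The attacker sees only (r, s1, h1) and (r, s2, h2):
k_recovered = ((h1 - h2) * inv(s1 - s2)) % n
d_recovered = ((s1 * k_recovered - h1) * inv(r)) % n
assert k_recovered == k and d_recovered == d
```

Since s1 - s2 = k⁻¹(h1 - h2), the shared nonce falls right out, and from there the key itself; RFC 6979 sidesteps this by deriving k deterministically from the key and message.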
It'd be fantastic to have the option available to have an ECC root around, and let us have RSA or ECC certs. (Yes, you can cross-sign across algorithms.) Perhaps have ECDSA off by default for a while in light of the above, but it can provide very good performance for people who use modern software and turn it on!
I'd suggest using secp256r1 (aka NIST P-256). It's already deployed and 256-bit curves lie at about RSA-3072 strength (stronger than most deployed CAs now, which typically use RSA-2048). A few others have deployed secp384r1, but that was following the NSA's Suite B lead; I'm not sold on that being relevant at all. secp256r1 is also fairly fast, with some very well-optimised constant-time routines available in OpenSSL if you enable a flag or even faster ones if you're using 1.2 beta (expect something like a 200%-250% performance boost over the generic elliptic curve routines); it's not quite Curve25519 speed, but it isn't bad.
I do however acknowledge the extreme murkiness surrounding the generation of the NIST/SECG/X9.62 curves. That does present some concern to me. I tried to get to the bottom of that (see my posts on CFRG) and I can summarise what I found out as basically (now-expired/irrelevant) patent-related shenanigans. I'm not super-comfortable with that degree of opacity in my curves - however, I also don't know of any actual security problems with secp256r1 (or secp384r1) as they stand, providing they are properly implemented (very big proviso!). I don't think they're backdoored, but make sure you check the hinges on the front door, and I'd prefer a house with better foundations!
More transparently-produced curves (such as the Brainpool curves) do exist, but Brainpool is sadly very slow in software (less than half the speed of a good P256 routine, and no scope for optimisations).
So looking forward, CFRG (at IRTF) were asked by the TLS Working Group to recommend even better new curves: most probably Curve25519 in my opinion, as that seems to admirably satisfy all the criteria for a fast, strong mainstream curve, plus probably one larger "extra paranoid" curve which I really don't know the identity of at this stage. These hopefully will be (or are) even faster, strong, and rigidly explained, without murky origins. And hopefully there will be better algorithms than ECDSA (perhaps a Schnorr-based algorithm such as Ed25519, now the patent has expired?). I very much doubt all the supporting infrastructure for that, like HSMs and widely-deployed software support, will be "ready" for this project in the timeframe we'd like, however, so in the meantime P256 or RSA-2048 is, I guess, OK.
In general: This is absolutely wonderful news. It, and the efficiency of TLS 1.2 (and later, TLS 1.3) will enable people to run TLS everywhere. I am very probably going to use it myself.
While the Israel-based CA StartCom does already offer free TLS certificates, and I have previously lauded them for that, they pulled an absolutely unforgivable move detrimental to internet security as a whole: refusing to revoke and rekey certificates for free, even exceptionally, in the immediate wake of Heartbleed. (I think they should have their CA status reviewed very harshly or revoked as a result, because I do not think that is compliant with CA/B guidelines: they definitely still have live signatures on compromised keys that have not been revoked, which is totally unacceptable.) If this initiative means we can replace and dump bad CAs, it's even better news.
Hm? The overhead reduction is on the server. Verification IIRC is somewhat slower.
There are CAs that issue them, uh... someone managed to get one issued during the TLS WG meeting at IETF last week. I'd have to listen to the audio to find out what CA they used.
I'm hoping that one day soon, I'll be able to remove this line from my nginx config:
ssl_certificate /path/to/file.crt;
My web server will notice that I want SSL, but haven't specified a path to a cert. It will then go off and generate one and get it signed automatically using an API like the one being discussed. It will also handle renewing automatically when the time comes.
This is a great initiative. On the other hand, I'm beginning to think that security models based on any central authority will always be at risk of getting compromised from within. Techniques that allow trusted security to be established between two parties without the need for a third-party authority to validate them would be nice to see.
That's a great idea and I'm a big fan of the EFF. But what browser support will this have? Even if all browsers on all platforms add this to their root certificates, how many years will it take before even half of the devices in use support it? (Remember the number of people still using Windows XP!)
ACME sounds great. Copying codes from emails is suboptimal at best. Free certificate from command line and free revocation from the same client sound even better.
I just don't know about the automatic configuration tool. Like webpanels for managing a server, it has never worked for me.
I wonder how many of the cheap web hosts will implement this. I think the increased hosting cost on top of the certificate itself also discourages people from using TLS. Wishful thinking, perhaps...
It's an interesting idea, I'm just not clear on how it works (even when looking at the "How it works" section) - e.g., how do I integrate this with... say, nginx?
TLS/SSL certificate setup is a pretty mechanical task. I would imagine their program detects common web servers (nginx, apache, etc), puts the private key somewhere, and points the configuration files at it.
afaik there will be an api to request just the certificate and you'll have to integrate it manually, or a special program which will automatically add it to nginx for you (presumably only for simple setups)
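For reference, the manual flow such a tool would automate looks roughly like this (file names and paths are illustrative):

```shell
# 1. Generate a private key and a certificate signing request (CSR)
openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.com.key -out example.com.csr \
    -subj "/CN=example.com"

# 2. Submit example.com.csr to the CA; receive example.com.crt back.

# 3. Point the web server at the files, e.g. in nginx:
#      ssl_certificate     /etc/ssl/example.com.crt;
#      ssl_certificate_key /etc/ssl/example.com.key;
#    then reload the server.
```

Step 2 is the part ACME turns into an API call; steps 1 and 3 are what the client automates for Apache today.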
I just set up an SSL certificate on my website for the first time and it only took like 10 minutes all together. I don't get any warnings from any browsers and it was only $10.
I'm so excited for this. I know both the people working on the team from the University of Michigan, and both are extremely smart people passionate about web encryption.
Wow, I had a goofy idea a few months ago that one day we could have some sort of non-profit/charity that just runs a free, as in beer and freedom, "common good" CA.
This is great news, but I am wondering how they will handle revoking certificates. For example: do we really want malware sites popping up with valid SSL certificates?
Various 3rd parties. This is required by the CA/Browser Forum, which my browser requires as well.
Inform yourself if you want to write stuff like that.
Even more, it's sad that people think CAs do zero checking and just give, what, money to browsers to be included? Thankfully it's not like that yet.
Who's auditing the auditors? Remember Moody's? It's not entirely analogous, but it's not far from it.
At some point down the chain, you have to rely on trust to some degree. Either disappear in to the wilderness and completely disconnect from the grid or - at some point - you have to trust someone.
Kudos to the EFF for making an easy-to-use tool to generate TLS certs!
Kudos also for creating the second CA to issue free certificates (the first being StartSSL).
The next step needs to be to man-in-the-middle (MITM) proof these certs. We still have to address that problem. We'll be talking about how the blockchain can be used to solve this problem tonight at the SF Bitcoin Meetup, if that interests you, you're welcome to come:
The blockchain can't fix this problem: it's too large for most embedded devices, which do matter. It isn't a solution to every problem on the planet just because it's a "cool new crypto idea". Just because something uses crypto doesn't mean adding the blockchain to it makes it any better.
Hmm, I'm not sure exactly how the parent intends using it, but isn't something like the blockchain (i.e. a publicly auditable log of all changes to certificates) already being proposed for improving the current PKI infrastructure? Also, is the problem you have for embedded devices that they can't afford to download, store, and verify the full bitcoin block-chain? Surely there are compromise solutions that could be made? Not denigrating your comment btw, it's an interesting observation!
One possible solution is a BitCoin-like block chain of certificate proof, so that a website's certificate can be verified against the domain without a central authority.
What authorization is required in this scenario? I'm talking about a novel idea here, one that doesn't fit into the existing CA model. There would be no CA in this scenario; verification would be decentralized, based on shared information, not on knowledge of a secret.
And we should do this, because this Let's Encrypt CA, while a great step forward, is still vulnerable to man-in-the-middle attacks, explained in this video:
Those free certs will encrypt from the browser to CloudFlare's CDN. You still need to do something to encrypt from CloudFlare to the publishing webserver. Self-signing can work for that hop, though Let's Encrypt may wind up being smoother for sys admins.
We've been working with CloudFlare to drive HTTPS adoption, and plan to work with them further on integration.
> A self signed certificate warning means "Warning! The admin on the site you're connecting to wants this conversation to be private but it hasn't been proven that he has 200 bucks for us to say he's cool"
no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".
Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private.
Unfortunately, there is no way to guarantee that to be true if the identity of the server certificate can't somehow be tied to the identity of the server.
So when you accept the connection unencrypted, you tell the user agent "hey, everything is ok here, I don't care about this conversation being private", so no error message is shown.
But the moment you accept the connection over ssl, the user agent assumes the connection to be intended to be private and failure to assert identity becomes a terminal issue.
This doesn't mean that the CA way of doing things is the right way - far from it. It's just the best that we currently have.
The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.
The solution is something nobody has quite come up with yet.
SSH has. It tells me:
WARNING, You are connecting to this site (fi:ng:er:pr:in:t) for the first time. Do your homework now. IF you deem it trustworthy right now then I will never bother you again UNLESS someone tries to impersonate it in the future.
That model isn't perfect either but it is much preferable over the model that we currently have, which is: Blindly trust everyone who manages to exert control over any one of the 200+ "Certificate Authorities" that someone chose to bake into my browser.
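A minimal sketch of that SSH-style trust-on-first-use model applied to certificate fingerprints (the storage file and return values are made up for illustration):

```python
import hashlib
import json
import os

def check_pin(host, cert_der, store="known_certs.json"):
    # like ~/.ssh/known_hosts, but mapping hostnames to cert fingerprints
    pins = {}
    if os.path.exists(store):
        with open(store) as f:
            pins = json.load(f)
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        # first visit: remember the fingerprint (in SSH, after asking the user)
        pins[host] = fp
        with open(store, "w") as f:
            json.dump(pins, f)
        return "first-use"
    # later visits: silent if unchanged, loud if someone is impersonating
    return "ok" if pins[host] == fp else "MISMATCH"
```

The weakness, as noted, is the very first connection; the strength is that any later impersonation attempt is visible, with no third party involved.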
So this is where we stand:
I think there's a pretty blatant antipattern here, and I'm not talking about colourblind-proofing the browser chrome.
> It's just the best that we currently have.
No, I wouldn't say so. Having SSL is better than having nothing on pretty much any site. But if you don't want to pay somebody $200 for nothing, you would probably consider using http by default on your site, because to a user who knows nothing about cryptography it just looks "safer", thanks to how browsers behave. Which is nonsense. It's worse than nothing.
And CAs are not "authorities" at all. They could lie to you, they could be compromised. Of course, the fact that a certificate has been confirmed by "somebody" makes it a little more reliable than if it was never confirmed by anyone at all, but these "somebodies", the CAs, don't have any control over the situation; they're just some guys who came up with the idea to make money like that early enough. You are as good a CA as Symantec is: you could just start selling certificates and it would be the same, except, well, you are just some guy, so browsers wouldn't accept your certificates, so they'd be worth nothing. It's all just about trust, and I'm not sure I trust Symantec more than I trust you. (And I don't mean I actually trust you, by the way.)
For everyone else it's not really about SSL, security and CAs, it's just about how popular browsers behave.
So, no, monopolies that exist only because they're the ones allowed to do something are never good. Unless they do it for free.
There's no question in my mind that the whole thing is a racket and militates against security (you generally don't even know all the evil organisations that your browser implicitly trusts - and all the organisations that they trust etc).
There are certainly other options too: here's my suggestion-
The first time you go to a site whose certificate you haven't seen before, the browser should show a nice friendly page that doesn't make a fuss about how dangerous it is, and shows a fingerprint image for the site that you can verify elsewhere: from a mail you've been sent, or against a list of images from fingerprint servers it knows about that have a record for that site, shown next to it.
Once you accept, it should store that certificate and allow you access to that site without making a big fuss or making it look like it's less secure than an unencrypted site. This should be a relatively normal flow and we should make the user experience accessible to normal people.
It's basically what we do for ssh connections to new hosts.
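That ssh behavior - trust on first use, alarm only on change - is easy to sketch for HTTPS. Everything below is illustrative (the fetch helper is real stdlib, but the known-hosts store is just a dict here):

```python
import hashlib
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and hash its DER encoding."""
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def tofu_check(known_hosts: dict, host: str, fp: str) -> str:
    """The ssh known_hosts rule: remember on first sight, alarm only on change."""
    if host not in known_hosts:
        known_hosts[host] = fp  # first visit: store quietly, no scary page
        return "first visit: fingerprint stored"
    if known_hosts[host] != fp:
        return "WARNING: fingerprint changed (possible MITM)"
    return "fingerprint matches"
```

The point of the flow: the scary interstitial appears only in the third branch, which is the one case where something actually changed.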
8 replies →
"Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private."
This is misguided thinking, pure and simple. Because of this line of thinking, your everyday webmaster has been convinced that encrypting data on a regular basis is more trouble than it's worth and allowed NSA (or the Chinese or the Iranian or what have you) authorities to simply put in a tap to slurp the entire internet without even going through the trouble of targeting and impersonating. Basically, this is the thinking that has enabled dragnet surveillance of the internet with such ease.
8 replies →
> no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to NSA which is forwarding all of your traffic to the site you're on".
That would be correct if you could assume that the NSA couldn't fake certificates for websites. But it can, so it's wrong and misleading. It's certificate pinning, notary systems etc. that actually give some credibility to the certificate you're currently using, not whatever the browsers indicate as default.
FWIW, (valid) rogue certificates have been found in the wild several times, CAs have been compromised etc. ...
26 replies →
Browsers shouldn't silently accept self-signed, but there is a class of servers where self-signed is the best we've got: embedded devices. If I want to talk to my new printer or fridge over the web, they have no way of establishing trust beyond trust-on-first-use: remembering whatever key answered my first request to them.
13 replies →
>So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation to be private", so no error message is shown.
Maybe a security-conscious person thinks that, but the typical user does not knowingly choose http over https, and thus the danger of MitM and (unaccepted) snooping is at least as large for the former.
So it's somewhat debatable why we'd warn users that "hey, someone might be reading this and impersonating the site" for self-signed https but not http.
The use case for the CA system is to prevent conventional criminal activity -- not state-level spying or lawful intercept. The $200 is just a paper trail that links the website to either a purchase transaction or some sort of communication detail.
The self-signed cert risk has nothing to do with the NSA... if it's your cert or a known cert, you add it to the trust store, otherwise, you don't.
Private to the NSA and reasonably private to the person sitting next to you are different use cases. The current model is "I'm sorry, we can't make this secure against the NSA and professional burglars so we're going to make it difficult to be reasonably private to others on the network".
It's as if a building manager, scared that small amounts of sound can leak through a door, decided that the only solution is to nail all the office doors open and require you to sign a form in triplicate that you are aware the door is not completely soundproof before you are allowed to close it to make a phone call. (Or jump through a long registration process to have someone come and install a heavy steel soundproofed door which will require replacement every 12 months.)
After all, if you're closing the door, it's clearly meant to be private. And if we can't guarantee complete security against sound leaks to people holding their ear to a glass on the other side, surely you mustn't be allowed to have a door.
1 reply →
Self-signed certificates are still better than http plain text. I understand not showing the padlock icon for self-signed certificates, I don't understand why you would warn people away from them when the worst case is that they are just as unsafe as when they use plain http. IMHO this browser behavior is completely nonsensical.
29 replies →
Here's one thing that's NOT the solution: throwing out all encryption entirely. Secure vs. insecure is a gradient. The information that you're now talking to the same entity as when you first viewed the site is valuable. For example, it means you can be sure you're talking to the real site when you log in to it on public wifi, provided you have visited that site before. In fact, I trust a site that's still the same entity as when I first visited it a whole lot more than a site with a new certificate signed by some random CA. In practice the security added by CAs is negligible, so it makes no sense to enable or disable encryption based on that.
Certificates don't even solve the problem they attempt to solve, because in practice there are too many weaknesses in the chain. When you first downloaded Firefox or Chrome, who knows whether the NSA tampered with the CA list? (Not that they'd need to.)
Moxie Marlinspike's Convergence addon for Firefox (a successor to the Perspectives project) was a good attempt to resolve some of the problems with self-signed certs.
Unfortunately, no browsers adopted the project, and it is no longer compatible with Firefox. There are a couple forks which are still in development, but they are pretty underdeveloped.
I wonder if Mozilla would be more likely to accept this kind of project into Firefox today, compared to ~4 years ago when it was first released, now that privacy and security may be more important topic to the users of the browser.
The solution, at least for something decentralized, seems to be a web of trust: multiple other identities sign your public key, each asserting a reasonable belief that your actual identity is in fact represented by that key.
That's what PGP/GPG people seem to do, anyway.
Why can't I get my personally-generated cert signed by X other people who vouch for its authenticity?
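A threshold version of that check is simple to sketch. Everything here is illustrative: HMAC stands in for real detached signatures (PGP or Ed25519 in practice) purely so the sketch runs on the standard library, and all names are invented:

```python
import hashlib
import hmac

def endorse(endorser_secret: bytes, key_fingerprint: bytes) -> bytes:
    # Stand-in for a real detached signature over the key fingerprint.
    return hmac.new(endorser_secret, key_fingerprint, hashlib.sha256).digest()

def is_trusted(key_fingerprint: bytes, endorsements, trusted_endorsers: dict,
               threshold: int = 3) -> bool:
    """Trust a key once `threshold` endorsers we already trust have signed it."""
    valid = sum(
        1
        for name, sig in endorsements
        if name in trusted_endorsers
        and hmac.compare_digest(sig, endorse(trusted_endorsers[name], key_fingerprint))
    )
    return valid >= threshold
```

The hard part, as with PGP, isn't the arithmetic but bootstrapping `trusted_endorsers` without reintroducing a central authority.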
> no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to NSA which is forwarding all of your traffic to the site you're on".
Well... that's true regardless, as the NSA almost certainly has control over one or more certificate authorities.
But I agree with the sentiment. :)
It's interesting that your boogeyman is the NSA and not scammers. I think scammers are 1000X more likely. Especially since the NSA can just read the decrypted traffic from behind the firewall. There's no technology solution for voluntarily leaving the backdoor open.
> or to NSA which
Nah. The NSA, or any adversary remotely approaching them in resources, has the ability to generate certificates that are on your browser's trust chain. Self-signed and unknown-CA warnings suggest that a much lower level attacker may be interfering.
Just a small nitpick: I'm pretty sure the NSA has access to a CA to make it look legit.
> The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody hasn't quite come up with.
We do have a solution that does accept self-signed certificates. The remaining pieces need to be finished and the players need to come together though:
https://github.com/okTurtles/dnschain
If you're in San Francisco, come to the SF Bitcoin Meetup, I'll be speaking on this topic tonight:
http://www.meetup.com/San-Francisco-Bitcoin-Social/events/18...
Let's Encrypt seems like the right "next step", but we still need to address the man-in-the-middle problem with HTTPS, and that is something the blockchain will solve.
I totally agree that CAs are a racket. There's zero competition in that market and the gate-keepers (Microsoft, Mozilla, Apple, and Google) keep it that way (mostly Microsoft however).
That being said: Identity verification is important as the encryption is worthless if you can be trivially man-in-the-middled. All encryption assures is that two end points can only read communications between one another, it makes no assurances that the two end points are who they claim to be.
So verification is a legitimate requirement and it does have a legitimate cost. The problem is the LOWEST barriers to entry are set too high, this has become a particular problem when insecure WiFi is so common and even "basic" web-sites really need HTTPS (e.g. this one).
It is not a legitimate requirement.
HTTP can be man-in-the-middled passively, and without detection; making dragnets super easy.
For HTTPS with self-signed certs to be MITMed effectively, the attacker has to be selective: if they MITM indiscriminately, clients can record which public key was used and compare notes. The content provider can run a process on top of a VPN or Tor that periodically requests a resource from its own server; if it detects that the service is being MITMed, it can shut the service down and bring in a certificate authority.
Edit: Also, all this BS about how HTTPS implies security is beside the grandparent's point: certificates and encryption are currently conflated, to the great detriment of security, and they need not be.
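The periodic check described above boils down to a fingerprint comparison: the "outside" certificate would come from ssl.get_server_certificate() issued through a VPN or Tor exit so the attacker can't special-case the probe. Function names here are mine:

```python
import hashlib
import ssl

def fingerprint(pem_cert: str) -> str:
    """SHA-256 over the DER form of a PEM-encoded certificate."""
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem_cert)).hexdigest()

def mitm_suspected(cert_on_server: str, cert_seen_from_outside: str) -> bool:
    """True when outside clients are being shown a different certificate
    than the one actually installed on the server."""
    return fingerprint(cert_on_server) != fingerprint(cert_seen_from_outside)
```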
7 replies →
That's the standard motivation for CAs, but I don't buy it.
Most of the time, I'm much more interested in a domain identity than a corporate identity. If I go to bigbank.com and am presented with a certificate, I want to know whether I am talking to bigbank.com - not that I'm talking to "Big Bank Co." (or at least one of the legal entities around the world under that name).
Therefore it would make much more sense if your TLD made a cryptographic assertment that you are the legal owner of a domain and that this information could be utilized up the whole protocol stack.
That would not have a legitimate cost, apart from the domain name system itself.
Without some kind of authentication, the encryption TLS offers provides no meaningful security. It might as well be an elaborate compression scheme. The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session. But of course, real attackers trivially see all the traffic for a session, because they snare victims with routing, DNS, and layer-2 redirection.
What's especially baffling about self-signed certificate advocacy is the implied threat model. Low- and mid-level network attackers and crime syndicates can't compromise a CA. Every nation state can, of course (so long as the site in question isn't public-key-pinned). But nation states are also uniquely capable of MITMing connections!
>The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session
Could you elaborate here? With a self-signed cert, the server is still not sending secret information in the first few packets; it just tells you (without authentication) which public key to use for the rest of the session (more precisely, the public key used to protect the exchange of the symmetric session key).
The threat model would be eavesdroppers who can't control the channel, only look. Using the SS cert would be better than an unencrypted connection, though still shouldn't be represented as being as secure as full TLS. As it stands, the server is either forced to wait to get the cert, or serve unencrypted such that all attackers can see.
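The passive-vs-active distinction can be made concrete with a bare Diffie-Hellman sketch: a listener who records p, g, A, and B can't feasibly compute the shared secret, while an active MITM just runs one exchange with each side - which is exactly the gap certificates are meant to close. The prime here (2^32 - 5) is real but absurdly small; it's for readability only, and a real deployment would use an RFC 3526 group or an elliptic curve:

```python
import secrets

p = 2**32 - 5  # a prime, but toy-sized: illustration only, NOT secure
g = 5

a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret, never sent
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret, never sent
A = pow(g, a, p)                   # sent in the clear; the snoop sees this
B = pow(g, b, p)                   # sent in the clear; the snoop sees this

shared_client = pow(B, a, p)       # both sides derive the same key...
shared_server = pow(A, b, p)       # ...without it ever crossing the wire
```

Nothing in this exchange tells the client *whose* B it received, which is the authentication hole the thread is arguing about.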
6 replies →
I'm not entirely sure I understand your point, so if I misunderstood you please correct me.
First, TLS rests on three principles; lose any one of them and it becomes essentially useless:
1) Authentication - you're talking to the right server
2) Encryption - nobody saw what was sent
3) Verification - nothing was modified in transit
Without authentication, you essentially are not protected against anything. Any router, any government can generate a cert for any server or hostname.
Perhaps you don't think EV certs have a purpose - personally, I think they're helpful to ensure that even if someone hijacks a domain they cannot issue an EV cert. Luckily, the cost of certificates is going down over time (usually you can get the certs you mentioned at $10/$150). That's what my startup (https://certly.io) is trying to help people get, cheap and trusted certificates (sorry for the promotion here)
Encryption without verification is not useless; it protects against snooping.
6 replies →
The warning pages are really ridiculous. Why doesn't every HTTP page show a warning you have to click through?
But it's not like MITM attacks are not real. CAs don't realistically do a thing about them, but it is true that you can't trust that your connection is private based on TLS alone. (unless you're doing certificate pinning or you have some other solution).
You're absolutely right. From first principles, HTTP should have a louder warning than self-signed HTTPS.
Our hope is that Let's Encrypt will reduce the barriers to CA-signed HTTPS sufficiently, that it will become realistic for browsers to show warning indicators on HTTP.
If they did that today, millions of sites would complain, "why are you forcing us to pay money to CAs, and deal with the incredible headache of cert installation and management?". With Let's Encrypt, the browsers can point to a simple, single-command solution.
2 replies →
Because HTTP does not promise security; HTTPS does. Without proper certificates, that guarantee is diluted; hence the warnings.
Why doesn't every HTTP page show a warning you have to click through?
Back in the Netscape days, it did. People got tired of clicking OK every time they searched for something.
Eventually maybe the browsers will do that. Currently far too many websites are HTTP-only to allow for that behavior, but if that changes and the vast majority of the web is over SSL it would make sense to start warning for HTTP connections. That would further reduce the practicality of SSL stripping attacks.
It's not enough to keep the snoops out - you need to KNOW you're keeping the snoops out. That's what SSL helps with. A certificate is just a public key signed by a trusted authority. Sites can also pin their certificates: then even if a third party procures a fake cert, they can't snoop the traffic without the very cert the web server uses.
Site: Here's my certificate; it contains my public key. But don't take my word for it - verify the CA signature on it against the set of trusted authorities pre-installed on your machine.
Browser: OK, your cert checks out against my trust store, and the hostname matches. Here's a freshly generated session key, encrypted with your public key so only you can read it.
Site: Got it. From here on, everything we exchange is encrypted and integrity-checked with that session key. Now you can talk to me.
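What "your cert checks out" means in practice is what Python's ssl module already enforces by default: chain validation against the platform trust store plus hostname checking, before any application data flows. A sketch (function names are mine; the defaults are spelled out only to show what's being relied on):

```python
import socket
import ssl

def make_verifying_context() -> ssl.SSLContext:
    """A context that refuses unverifiable chains and mismatched hostnames."""
    ctx = ssl.create_default_context()   # loads the platform trust store
    ctx.check_hostname = True            # already the default; shown for clarity
    ctx.verify_mode = ssl.CERT_REQUIRED  # likewise the default
    return ctx

def fetch_verified_subject(host: str, port: int = 443):
    """Handshake with host; raises ssl.SSLCertVerificationError on a bad chain."""
    with socket.create_connection((host, port)) as sock:
        ctx = make_verifying_context()
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]
```

A self-signed certificate fails the chain check in `wrap_socket`, which is precisely the error page this whole thread is arguing about.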
This is what certificates help with. There are verification standards that apply, and all certificate authorities have to agree to follow these standards when issuing certain types of SSL certificates. The most stringent, the "green bar" certificates that display the entity name, often require verification through multiple means, including bank accounts. Certificate authorities that fail to verify properly can have their issuing privileges revoked (though this is hard to do in practice, it can be done).
Here's some comparison screenshots of the "bling" that is being described (hard to even tell that some of these sites are SSL'd without getting the EV)
https://www.expeditedssl.com/pages/visual-security-browser-s...
I'm pissed off 'cos I'm on the board for rationalwiki.org and we have to pay a friggin' fortune to get the shiny green address bar ... because end users actually care, even as we know precisely what snake oil the whole SSL racket is. Gah.
I'm all for CAs to burn in a special hell. The other cost, though, was always getting a unique IP. Is that still a thing? Has someone figured out multiple certificates for different domains on the same IP? Weren't we running out of IPv4 at some point?
Yes, there are two main mechanisms, each with its own limitations.
https://en.wikipedia.org/wiki/SubjectAltName
https://en.wikipedia.org/wiki/Server_Name_Indication
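SNI shows up in code as the `server_hostname` argument: the client names the host it wants inside the handshake, so one IP can serve many certs, while SubjectAltName lets one cert cover many names. Helper names below are mine:

```python
import socket
import ssl

def dns_names(cert: dict):
    """DNS entries from a parsed certificate's SubjectAltName
    (the dict shape returned by SSLSocket.getpeercert())."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

def san_for(host: str, port: int = 443):
    """Connect with SNI and list the DNS names the presented cert covers."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        # server_hostname both sends SNI and drives hostname verification
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return dns_names(tls.getpeercert())
```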
The thing is, without a chain of trust, the self-signed certificate might be from you or it might be from the "snoops" themselves. Certificates that don't contain any identifying information are vulnerable to man-in-the-middle attacks.
I have some certificates through RapidSSL, and when they send me reminders to renew, the e-mails come with this warning:
"Your certificate is due to expire.
If your certificate expires, your site will no longer be encrypted."
Just blatantly false.
They might as well say something even more ominous: "If your certificate expires, your site will no longer be accessible."
Of course, we know that's not true either, but try explaining to your visitors how to bypass the security warning (newer browsers sure don't make it obvious, even if you know to look for it).
I just bought a cert on Saturday for $9. It's less than the domain name.
$9 is a big step up from free, which is what the rest of my blog costs.
1 reply →
Can get them free for web use. Not sure where he is coming from.
1 reply →
> 200 bucks for us to say he's cool
There are trusted free certificates as well, like the ones from StartSSL.
> if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge
Yeah, $10,000 and legal documentation proving that they are exactly the same legal entity as the one stated on the certificate. All verified by a provider that's been deemed trustworthy by your browser's developers.
Finally, if a certificate is self-signed, it generally should be a large warning to most users: the certificate was made by an unknown entity, and anybody may be intercepting the communication. Power users understand when self-signed certificates are appropriate, and they don't get scared of red warnings either, so that's not an issue.
> This certificate industry has been such a racket. It's not even tacit that there are two completely separate issues that certificates and encryption solve.
But a man-in-the-middle attack will remove any secrecy encryption provides and to prevent that, we require certificate authorities to perform some minimal checks that public keys delivered to your browser are indeed the correct ones.
You've got a point about how warnings are pushing incentives towards more verification, but they serve a purpose that aligns with secrecy of communication.
Wasn't WOT (Web Of Trust) supposed to fix this? Basically, I get other people to sign my public key asserting that it's actually me and not someone else, and if enough people do that it's considered "trusted", but in a decentralized fashion that's not tied to "authorities"?
No, it means a trusted third party has not verified that whoever you are connecting to is who they say they are.
Perhaps you should understand a system before slandering it? As others have said, encryption without authentication is useless.
Running a CA has an associated cost: maintenance, security, and so on. That's what you pay for when you acquire a certificate. Whether the current market's markup is too high is a different question, but paying for a certificate is definitely not spending $200 to look cool.
CAs are the best known way (at the moment) to authenticate through insecure channels (before anyone brings up pinned certs or WoT, read this comment of mine: https://news.ycombinator.com/item?id=8616766)
EDIT: You can downvote all you want but I'm still right. Excuse my tone, but slandering a system without an intimate understanding of the "how"s and the "why"s (i.e. spreading FUD) hurts everyone in the long run.
That's the third comment of yours in which I've seen you taunt downvoters via edits in this thread alone. That's why I'm downvoting you. Knock it off, please.
2 replies →
This is awesome! It looks like what CACert.org set out to be, except this time, instead of developing the CA first and then seeking certification (which has been a problem due to the insanely expensive audit process), the EFF got the vendors on board first and then started on the nuts and bolts.
This is huge if it takes off. The CA PKI will no longer be a scam anymore!!
I'd trust the EFF/Mozilla over a random for profit "security corporation" like VeriSign any day of the week and twice on Sunday to be good stewards of the infrastructure.
I don't see how this actually keeps the CA PKI from being a scam. While I personally trust the EFF & Mozilla right now, as long as I can't meaningfully revoke that trust, it's not really trust and the system is still broken.
You can revoke your trust in any CA at any time, you don't even need to see any errors! Just click the little padlock each time you visit a secure website and see if the CA is in your good books. If it's not, pretend the padlock isn't there!
OK, that's a little awkward. A browser extension could automate this. But in practice, nobody wants to do this, because hardly anyone has opinions on particular CAs. It's a sort of meta-opinion - some people feel strongly they should be able to feel strongly about CAs, but hardly anyone actually does. So nobody uses such browser extensions.
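The core check such an extension would automate is tiny: read the issuer off the peer certificate and refuse CAs on a personal blocklist. This works against the dict shape returned by Python's `SSLSocket.getpeercert()`; the CA names are invented:

```python
def issuer_cn(cert: dict):
    """Issuer common name from a parsed peer-certificate dict."""
    for rdn in cert.get("issuer", ()):       # issuer is a tuple of RDNs
        for key, value in rdn:               # each RDN is (key, value) pairs
            if key == "commonName":
                return value
    return None

def acceptable(cert: dict, distrusted_cas: set) -> bool:
    """Personal trust policy: reject anything signed by a blocklisted CA."""
    return issuer_cn(cert) not in distrusted_cas
```

The hard part, as the comment says, isn't the code; it's that almost nobody maintains a `distrusted_cas` set of their own.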
9 replies →
The EFF has a bad track record in this area. The last time they tried something to identify web sites, it was TRUSTe, a nonprofit set up by the EFF and headed by EFF's director. Then TRUSTe was spun off as a for-profit private company, reduced their standards, stopped publishing enforcement actions, and became a scam operation. The Federal Trade Commission just fined them: "TRUSTe Settles FTC Charges it Deceived Consumers Through Its Privacy Seal Program Company Failed to Conduct Annual Recertifications, Facilitated Misrepresentation as Non-Profit" (http://www.ftc.gov/news-events/press-releases/2014/11/truste...) So an EFF-based scheme for a new trusted nonprofit has to be viewed sceptically.
This new SSL scheme is mostly security theater. There's no particular reason to encrypt traffic to most web pages. Anyone with access to the connection can tell what site you're talking to. If it's public static content, what is SSL protecting? Unless there's a login mechanism and non-public pages, SSL isn't protecting much.
The downside of SSL everywhere is weak SSL everywhere. Cloudflare sells security theater encryption now. All their offerings involve Cloudflare acting as a man-in-the-middle, with everything decrypted at Cloudflare. (Cloudflare's CEO is fighting interception demands in court and in the press, which indicates they get such requests. Cloudflare is honest about what they're doing; the certificates they use say "Cloudflare, Inc.", so they identify themselves as a man-in-the-middle. They're not bad guys.)
If you try to encrypt everything, the high-volume cacheable stuff that doesn't need security but does need a big content delivery network (think Flickr) has to be encrypted. So the content-delivery network needs to impersonate the end site and becomes a point of attack. There are known attacks on CDNs; anybody using multi-domain SSL certs with unrelated domains (36,000 Cloudflare sites alone) is vulnerable if any site on the cert can be broken into. If the site's logins go through the same mechanism, security is weaker than if only the important pages were encrypted.
You're better off having a small secure site like "secure.example.com" for checkout and payment, preferably with an Extended Validation SSL certificate, a unique IP address, and a dedicated server. There's no reason to encrypt your public product catalog pages. Leave them on "example.com" unencrypted.
Regarding your first paragraph, I agree: all CAs need continuing scrutiny. Certificate Transparency, for example.
Regarding the rest of your post, however, I'm calling bullshit. You give very bad advice. Deploy TLS on every website. Deploy HTTP Strict-Transport-Security wherever you can.
The sites people visit are confidential, and yes, are not protected enough at the moment. (That will eventually improve, piece by piece.) That's absolutely no excuse at all for you not protecting data about the pages they're on or the specific things they're looking at, even if your site is static, or not protecting the integrity of your site. You have no excuse for that. Go do it.
Your other big problem is thinking that anything on your domain "doesn't need security". Yes it does - unless you actually want your website co-opted for malware planting by nation-state adversaries with access to Hacking Team(s) (~cough~), or a middleman injecting malicious JavaScript into the insecure parts of your site, or someone else's "secure" login page served over http: with a lock favicon. (I have seen this in the wild, yes.) If you've deployed a site following that bad advice, it could be exploited like that today: go back and encrypt it properly before someone hacks your customers. This is why HSTS exists. Use it.
Regarding your CDN point, kindly cite - or demonstrate - your working "known attack" against Cloudflare's deployment?
kindly cite
Black Hat 2014, "The BEAST Wins Again: Why TLS Keeps Failing to Protect HTTP", Antoine Delignat-Lavaud, slide 42: https://www.blackhat.com/docs/us-14/materials/us-14-Delignat...
Basic concept:
1) Find a target site A with a shared SSL cert. Cloudflare issues shared certs covering 50+ unrelated domains.
2) Find a vulnerable server B in a domain on the same cert (probably WordPress).
3) Attack server B, inserting a fake copy of important pages from site A that attacks the client or intercepts passwords/credit cards.
4) Use a DNS poisoning attack to redirect A to B.
All it takes is one vulnerable site out of the 50+ on the same cert.
The whole shared-cert thing is a workaround for Windows XP. Cloudflare does it because they're still trying to support IE6 on Windows XP, which doesn't speak Server Name Indication, and they don't have enough IPv4 addresses to give one to each customer.
1 reply →
>If it's public static content, what is SSL protecting?
Plenty.
Off the top of my head:
It protects people/companies from having their reputations ruined by a MITM attack that replaces content on their site with something offensive.
It protects sensitive/important content on sites from being tampered with by an attacker. For example, if I am hosting a binary for download I can make a signature available for that binary on my site. In order for the signature to serve its purpose the user needs to be sure it hasn't been modified en route.
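A minimal version of that check, assuming the publisher serves a SHA-256 digest next to the download - which only means anything if the page carrying the digest arrived over HTTPS untampered:

```python
import hashlib

def matches_published_digest(blob: bytes, published_sha256_hex: str) -> bool:
    """True iff the downloaded bytes hash to the digest published on the page."""
    return hashlib.sha256(blob).hexdigest() == published_sha256_hex.lower()
```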
> Anyone with access to the connection can tell what site you're talking to.
HTTPS encrypts the URL paths you access [1]. Would you rather an adversary knew which IPs you visited, or the IPs plus the URLs?
[1] http://stackoverflow.com/questions/499591/are-https-urls-enc...
> If it's public static content, what is SSL protecting?
Comcast was recently caught injecting self-promotional ads via JavaScript injection. Sites using HTTPS are immune from this sort of attack. [1]
[1] http://arstechnica.com/tech-policy/2014/09/why-comcasts-java...
TLS gives you authenticity and secrecy; those seem like useful defaults, and in 2014, I think the question should be "how?" rather than "why?" It seems this project aims to address some of the process headaches and cost barriers that currently deter some from using TLS by default.
I do think behind-the-CDN interception, in-front-of-the-CDN compromises, and weak CDN crypto are all serious concerns. I won't name any names here, but the employment histories of major CDNs' security team members definitely deserve closer scrutiny by civil society groups and reporters, especially those interested in fighting mass surveillance.
But overall, I think it's important to respect the privacy and security of users first, and work toward solving the engineering problems that need to be solved in order to affirm that commitment to users, as these folks have tried to do.
> If it's public static content, what is SSL protecting?
In this case, SSL protects against MITM attacks. If a customer goes to the unencrypted "example.com" site and gets a bunch of ads for porn, it will give the customer a negative impression of the company. All it would take is a few pitchfork-wielding high-profile twitter accounts to cause a PR nightmare. Even if the cause is a hacked coffee shop wireless access point, it may be hard to restore public opinion.
That scenario is a long-shot, but in my opinion, the potential negative consequences outweigh the time and energy required to set up SSL (especially since a basic SSL certificate is free).
> especially since a basic SSL certificate is free
From where? StartSSL only gives out free certs to individuals. For my company, they've actually required me to get organizational validation in the past, which wasn't cheap ($200, IIRC—$100 for the organizational validation, plus $100 for stage 2 personal validation, which also required me to upload images of my driver's license and passport).
3 replies →
> If it's public static content, what is SSL protecting?
https:// helps protect the act of participation and deters the building of dossiers.
It's the difference between the books in the library and the list of books in the library you have read.
Since SSL doesn't hide the length of the encrypted document, an attacker can make a good guess as to what public static content is being read.
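A toy version of that guess: subtract an assumed fixed per-record overhead and look for static pages whose known sizes land close to the observed ciphertext length. Page names, sizes, and the overhead constant are all invented for illustration:

```python
def guess_page(observed_length: int, page_sizes: dict,
               overhead: int = 29, slack: int = 16):
    """Pages whose plaintext size is within `slack` bytes of the observation."""
    approx_plain = observed_length - overhead
    return [page for page, size in page_sizes.items()
            if abs(size - approx_plain) <= slack]
```

Padding schemes blunt this, but plain TLS over static content leaks sizes essentially as-is.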
1 reply →
> There's no particular reason to encrypt traffic to most web pages
how about the SPDY protocol and the faster speed it offers?
> There's no reason to encrypt your public product catalog pages. Leave them on "example.com" unencrypted.
Of course this is true in theory, but in practice, both clients and customers get 'warm fuzzies' from seeing that green lock in the URL window.
It lets them 'know' that the company they are dealing with is at least somewhat reputable. Whether this is true or not doesn't matter; it is the perception many people have, and it does affect sales numbers in the real world.
I think the realpolitik / "not really caring about users" rationale is more "when someone MITMs the person browsing your company's catalog, it still makes your company look bad". And in my opinion, it should.
Looking at the spec [0] I'm concerned about the section on 'Recovery Tokens'.
"A recovery token is a fallback authentication mechanism. In the event that a client loses all other state, including authorized key pairs and key pairs bound to certificates, the client can use the recovery token to prove that it was previously authorized for the identifier in question.
"This mechanism is necessary because once an ACME server has issued an Authorization Key for a given identifier, that identifier enters a higher-security state, at least with respect to the ACME server. That state exists to protect against attacks such as DNS hijacking and router compromise which tend to inherently defeat all forms of Domain Validation. So once a domain has begun using ACME, new DV-only authorization will not be performed without proof of continuity via possession of an Authorized Private Key or potentially a Subject Private Key for that domain."
Does that mean that if, for instance, someone used an ACME server to issue a certificate for a domain in the past, but the domain registration then expired and someone else legitimately bought the domain later, the new owner would be unable to use that ACME server to issue an SSL certificate?
[0] https://github.com/letsencrypt/acme-spec/blob/master/draft-b...
This is a question about the policy layer of the CA using the ACME protocol.
The previous issuing CA should have revoked the cert they issued when the domain was transferred. But a CA speaking the ACME protocol might choose to look at whois and DNS for additional information to decide whether it issues different challenges in response to a certification request.
It's possible that this question shouldn't be decided one way or another in the specification, since it will ultimately be more a matter of CA policy about how the CA wants to handle automated issuance and risks.
I suppose they could poll WHOIS at a regular interval to check whether a domain secured by one of their certs has expired, and update the state of the ACME server accordingly?
Free CA? This is cool. Why this wasn't done a long time ago is beyond me. (Also please support wildcard certs)
An interesting thing happened at a meet-up at Square last year. Someone from google's security team came out and demonstrated what google does to notify a user that a page has been compromised or is a known malicious attack site.
During the presentation she was chatting about how people don't really pay attention to the certificate problems a site has, and how they were trying to change that through alerts/notifications.
After which someone asked that if google cared so much about security why didn't they just become a CA and sign certs for everyone. She didn't answer the question, so I'm not sure if that means they don't want to, or they are planning to.
What privacy concerns should we have if someone like goog were to sign the certs? What happens if a CA is compromised?
It wasn't done a long time ago because running a CA costs money (which is why they charge for certificates), so whoever signs up to run one is signing up for a money sink with no prospect of direct ROI, potentially for a loooooong time. This new CA is to be run by a non-profit that uses corporate sponsorship rather than being supported by the market; whether that's actually a better model in the long run is I suppose an open question. But lots of other bits of internet infrastructure are funded this way, so perhaps it's no big deal.
There aren't a whole lot of privacy concerns with CAs as long as you use OCSP stapling, so users' browsers aren't hitting up the CA each time they visit a website (Chrome never does this, but other browsers do).
Re: CA compromise. One reason running a CA costs money is that the root store policies imposed by the CA/Browser Forum require (I think!) the usage of a hardware security module which holds the signing keys. This means a compromised CA could issue a bunch of certs for as long as the compromise is active, but in theory it should be hard or impossible to steal the key. Once the hackers are booted out of the CA's network, it goes back to being secure. Of course quite some damage can be done during this time, and that's what things like Certificate Transparency are meant to mitigate - they let everyone see what CAs are doing.
> imposed by the CA/Browser Forum require (I think!)
That's something imposed by the audit criteria (WebTrust/ETSI). What you detailed is also why roots are kept disconnected from the internet - if an intermediate is compromised, it can be blacklisted, as opposed to the entire root.
I'm curious. What's the biggest cost in running a CA? As in, what makes those certs so expensive?
> Why this wasn't done a long time ago is beyond me.
While probably not officially scriptable, free certificates have been available since a long time ago: https://www.startssl.com/?app=1
Also, no free wildcard certs. Which I really want.
> What happens if a CA is compromised?
Looking at past compromises, if they have been very irresponsible they are delisted from the browsers' list of trusted roots (see diginotar). If they have not been extremely irresponsible, then they seem to be able to continue to function (see Comodo).
https://en.wikipedia.org/wiki/DigiNotar#Refusal_to_publish_r... https://blogs.comodo.com/uncategorized/the-recent-ra-comprom...
I'll run a free CA right now. Who wants a cert for microsoft.com?
NB: This is a bit unfair, because the existing for-money CAs haven't always stopped someone from registering microsoft.com.
You raise a good point, though: SSL/TLS certs are trying to deal with two separate problems:
1. Over the wire encryption (which this handles)
2. Site identification to stop phishing (a bad mechanism, but the best we've got).
Currently, for even the cheapest certs (domain+email validated), the CAs will reject SSL cert requests for anything that might be a phishing target. Detecting "wellsfargo.com" is pretty easy; where it gets tricky is things like "wellsforgo.com", "wellsfàrgo.com", etc., which, if I'm looking at this right, will sail straight through with LetsEncrypt.
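If a DV CA did want to screen for lookalikes like those, the core of it is fuzzy matching. A rough sketch — the blocklist, the accent-folding trick, and the distance threshold are all invented for illustration:

```python
import unicodedata

# Hypothetical blocklist; a real CA would use a much larger list of targets.
TARGETS = {"wellsfargo.com"}

def skeleton(domain):
    """Fold accented lookalikes, so "wellsfàrgo" becomes "wellsfargo"."""
    nfkd = unicodedata.normalize("NFKD", domain)
    return "".join(c for c in nfkd if not unicodedata.combining(c)).lower()

def edit_distance(a, b):
    """Plain Levenshtein distance, to catch one-letter swaps like "wellsforgo"."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_phishy(domain):
    s = skeleton(domain)
    return any(s == t or edit_distance(s, t) <= 1 for t in TARGETS)
```

Real confusable detection (Unicode TR39 skeletons, IDN homograph tables) goes well beyond this, which is part of why it's hard to automate.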
I suspect we're actually going to end up with two tiers of SSL certs, as the browser makers have started to really de-emphasize domain-validated certs [1] like this vs the Extended Validation (really expensive) certs, to the point where in most cases having a domain cert no longer shows green (and maybe doesn't even show a lock) at all.
As a side note, Google had announced that they were going to start using SSL as a ranking signal [2] (sites with SSL would get a slight bump in rankings), from this perspective the "high" cost of a cert was actually a feature as it made life much more expensive on blackhat SEOs who routinely are setting up hundreds of sites.
1 - Screenshots: https://www.expeditedssl.com/pages/visual-security-browser-s...
2 - http://googlewebmastercentral.blogspot.com/2014/08/https-as-...
If you can make microsoft.com serve up the correct challenge response, you'll be able to get a cert for them issued by this project. This isn't a pure rubber-stamping service.
> Free CA? This is cool. Why this wasn't done a long time ago is beyond me. (Also please support wildcard certs)
There have been previous attempts, e.g. http://www.cacert.org/
AFAIK they failed in the politics front (getting accepted in mainstream browsers). Sounds like EFF might have better leverage.
I think the issue of whether or not there should be a wide new industry borne on the back of the CA architecture, it's all a bit of a red herring, anyway. This is only security at the web browser: do we trust our OS vendors to be CAs, too? If so, then I think we may see a cascade/avalanche of new CAs being constructed around the notion of the distribution. I know for sure, even if I have all the S's in the HTTP in order, my machine itself is still a real weak point. When, out of the box, the OS is capable of building its own certified binaries and adding/denying capabilities of its build products, inherently, then we'll have an interesting security environment. This browser-centric focus of encryption is but the beachhead for broader issues to come, methinks; do you really trust your OS vendor? Really?
If each domain name can get a non-wildcard cert for free, quickly, why do you need wildcard certs? For multi-subdomain hosting on one server? Just wondering.
For my previous use cases, it's ideal for dynamically created subdomains of a web application. If I know ahead of time, it's easy to grab a cert for any subdomain. However if a user is creating subdomains for a custom site or something similar, it's much nicer/easier to have the wildcard cert.
Lots of services create dynamic subdomains in the form of "username.domain.com". To offer SSL on those domains without a wildcard certificate, you'd need to obtain a new certificate and a new IPv4 address every time a user signs up. You also need to update configuration and restart the web server process.
Google is a CA, and they sign their own certs as "Google Internet Authority G2" under SHA fingerprint BB DC E1 3E 9D 53 7A 52 29 91 5C B1 23 C7 AA B0 A8 55 E7 98.
They're subordinate under another CA (GlobalSign), and presumably contractually obligated to only sign their own certs. GlobalSign offers the following service to anyone willing to pay the sizable fee, undergo a sizable audit, comply with the CA/Browser Forum rules, and only issue certs to themselves:
https://www.globalsign.com/certificate-authority-root-signin...
There are a few other vendors that I've seen offer similar services.
I couldn't be happier about the news; the EFF and Mozilla always had a special place in my heart. However, the fact that we have to wait for our free certificates until the accompanying command line tool is ready for prime time seems unnecessary. Another thing I'm interested in is whether they provide advanced features like wildcard certificates. This is usually the kind of thing CAs charge somewhat significant amounts of money for.
The thing that's causing the delay is not the client software development, it's the need to create the CA infrastructure and then perform a WebTrust audit. If we were ready on the CA side to begin issuing certificates today, we would be issuing them today.
I think I may have misunderstood all of you. Is the audit process itself really that time consuming? I can imagine the amounts of bureaucracy involved, but I can't imagine this takes much longer than, say, a month or so. Most of the time is probably spent waiting for someone or something, right? I mean we're talking about very capable people here who have done this kind of thing before.
I doubt the actual CA has been set up either. They're setting up their own root while cross-signing from IdenTrust; that's not a one-day activity. Auditors have to be present, software has to be designed and tested, etc.
It's true that this shouldn't be done in a day, but it's trivial compared to building a command line tool that automatically configures HTTP servers and designing an open protocol that issues and renews certificates. This is especially true if one of your partners is a CA.
---
Let me be clear here: I'm not complaining that I don't get my free cake now. I do think however that most people at the EFF and Mozilla would agree that we needed something like this a couple of years ago. In that context I think it's at least noteworthy that they decided to wait until other parts of the system are ready.
So, one CA to rule them all?
There's a scenario (simplified for illustration, but entirely possible) that's normally not a huge risk because there are many CAs, and they are private, for-profit companies that have an economic incentive to protect you and your certificate's ability to assure end users that a conversation's privacy won't be compromised.
1) browser requests site via SSL
2) MITM says, "let's chat - here's my cert"
3) browser asks, "is this cert legit for this domain?"
4) MITM says, "yes, CA gave us this, because of FISA, to give to you as proof"
5) browser says, "ok, let's chat"
I'm not trying to spread FUD, but if you're NSA and you've been asking CAs for their master keys for years, doesn't a single CA sound great (free and easy == market consolidation), and doesn't EFF seem like the perfect vector for a Trojan horse like this, given its popularity and trust among hacker types gained in recent years?
We will look for ways to mitigate the risk of misissuing for any reason, including because someone tries to coerce us to misissue. One approach to this that's interesting is Certificate Transparency.
http://www.certificate-transparency.org/
There's also HPKP, TACK, and DANE, plus the prospect of having more distributed cert scans producing databases of all the publicly visible certs that people are encountering on the web.
DANE is the way to go forward. Have your TLD CA sign your domain key and sign your web certificates with your own key.
Only one "root CA" to trust per TLD, and it's free if your domain is under a TLD that supports DNSSEC (most do these days).
Now we just need the DANE check built into the browser without any plugins that require installation.
I'm not sure I follow that line of reasoning. Each CA is independently and completely able to issue certificates (not counting EV, but let's leave that out). There are hundreds of CAs. Depending on your trust store, some of them are literally owned by the US Department of Defense. Others are owned by the Chinese government.
How does having _fewer_ CAs make anything easier? Why is the EFF a better route than any of the various other companies that have gotten themselves in the CA program? And given that all the CAs are equivalently trusted at a technical level, why does the human trust afforded the EFF affect whether it's a better target?
This is not an attempt to reduce the CA system to a single CA. The intent here is to provide a simple and free way for anyone to get basic DV certs. If we can also contribute to CA best practices, and help improve the CA system in general, we'd like to do that too.
Let's Encrypt is only planning to issue DV certificates, since that is the only type that can be issued in a fully automated way. Many organizations will want something other than DV, and they'll have to get such certs from other CAs.
Also, our software and protocols are open so that other CAs can make use of them.
Let's Encrypt is going to publish records of everything it signs, either with Certificate Transparency or some other mechanism.
Browsers will be able to check any cert signed by the Let's Encrypt CA against the published list. If there's a discrepancy, that will be immediately detectable.
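As a toy model of why a published, append-only record makes misissuance detectable: real Certificate Transparency uses a Merkle tree with signed tree heads, but a simple hash chain captures the core append-only idea.

```python
import hashlib

class PublicLog:
    """Append-only log of issued certs; each head hashes over the previous one.

    Toy sketch only: CT proper uses a Merkle tree so clients can verify
    inclusion/consistency with log-sized proofs instead of a full replay.
    """

    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32

    def append(self, cert_bytes):
        # New head commits to both the old head and the new entry,
        # so nothing already published can be silently altered or dropped.
        self.head = hashlib.sha256(self.head + cert_bytes).digest()
        self.entries.append(cert_bytes)
        return self.head

    def verify(self):
        """Anyone can replay the entries and check the published head."""
        h = b"\x00" * 32
        for e in self.entries:
            h = hashlib.sha256(h + e).digest()
        return h == self.head
```

Tampering with any published entry changes the replayed head, which is the "immediately detectable discrepancy" being described.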
Out of curiosity, what if the MITM says, "include me in this list for IP <user IP>"? If the check is not done in a way that solves the Byzantine generals problem, I don't see how this feature provides any more protection, other than one more hoop to jump through.
The NSA (or any other agency) only has to coerce any single CA to cooperate. As long as it's in the standard set shipped with browsers, its certificates are accepted.
And pretty much every major government directly or indirectly controls one or multiple CAs that are in the standard set.
The "How It Works" page, https://letsencrypt.org/howitworks/, has me a bit worried. Anytime I see a __magic__ solution that has you running a single command to solve all your problems I immediately become suspicious at how much thought went into the actual issue.
If I'm running a single web app on a single Ubuntu server using Apache then I'm set! If I'm running multiple web apps across multiple servers using a load balancer, nginx on FreeBSD then...
All the same I'm really looking forward to this coming out, it can be nothing but good that all of these companies are backing this new solution and I'm sure it'll expand and handle these issues as long as a good team is behind it.
What you're seeing today is demos, not the software in its final form. You're also seeing it demo'd with a focus on the most simple usage. There are, and will be, advanced options.
We'll be doing quite a bit of work based on user feedback between now and when we go live. We're well aware that we need to cater to a variety of types of users.
I run Apache httpd, and there's no way I'd let a wizard anywhere near my configuration files or private keys, much less run it on a production server.
I think it's about time for a free CA that is recognized by all clients, but you still need to establish a trust chain to exchange a CSR for a signed certificate. This service needs to be server agnostic. The barrier to adoption isn't configuration, and HTTPS isn't the only thing that uses certificates.
There are lots of different barriers to adoption. With this project we are attacking several of them at the outset, including the cost of obtaining a certificate, and the inconvenience or difficulty of obtaining and installing it for users who don't do that every day.
Because of the open protocol we also aspire to support users with more complex configurations and requirements, who are absolutely welcome and encouraged to write their own implementations of the protocol and integrate with their own existing certificate management and configuration methods. If you think of other barriers to adoption that we can help with, please let us know and we'll try to address them; if you just want our certs for free, please get them and enjoy!
Yes, this will only hit the common small-site case. Hopefully if you're running "multiple web apps across multiple servers using a load balancer" you will have the skill to configure HTTPS properly for that situation, which will probably involve custom configuration on the load balancer. It's not a criticism of something trying to solve the common case, where the common solution up until today is pretty much "just forget about it", that it doesn't work at "cloud scale".
It doesn't seem as magical when you drill down. And if you roll your own nginx or whatever, it'll be less transparent still. But yeah, someone like Ubuntu or Red Hat could enable this on their product that simply.
Domain validation is done through a challenge (issued by a CA) to sign arbitrary data and put it at a URL (covered by the domain) that the CA can then query. This seems pretty solid. Better than email.
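A sketch of that challenge flow, loosely modeled on the ACME draft — the token format and the well-known URL layout here are illustrative, not the exact wire format:

```python
import hashlib
import secrets

def issue_challenge():
    """CA side: a random token the applicant must publish under the domain."""
    return secrets.token_urlsafe(32)

def challenge_response(token, account_key):
    """Applicant side: bind the token to the account key and serve the result,
    e.g. at http://example.com/.well-known/<token> (path is illustrative)."""
    return f"{token}.{hashlib.sha256(account_key).hexdigest()}"

def validate(token, account_key, fetched_body):
    """CA side: fetch the URL over plain HTTP and compare against what only
    the holder of the account key should have been able to publish."""
    return fetched_body == challenge_response(token, account_key)
```

Binding the response to the account key (not just echoing the token) is what stops another tenant on a shared host from hijacking someone else's pending validation.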
I don't get why they are releasing a command line, instead of just giving us a cert that we can install by ourselves.
Here's the current process:
Generate key, Generate CSR, Send CSR, Receive Certs from CA, Verify ownership, Install certs
Presumably their command line client creates the key, the CSR, sends the CSR, then gets back the certs (at least I'd hope so). I'd be happy to use a vetted command line utility which did that, or even just parts of that process, if I were sure the private key were not transmitted. It's just automating stuff which with current CAs needs to be done manually.
The tool will gather the domains, use the CA API to validate ownership, obtain the certs (which cannot be unilaterally created since they are based on a public/private key pair) and manage their expiry.
That's a bit more than "giving us a cert".
That wouldn't be safe, because then they would have access to your private key and could impersonate you. Having you (indirectly, via their script) generate the key and submit the public key for signing means your private key never leaves the premises.
It's primarily because of the interactive challenge to prove that you control the domains you're requesting the cert for.
If you want, the client can just give you the cert at the end instead of installing it. In the common case for a user who's not currently comfortable with the process, the client is automating several things -- generating a private key and CSR, proving control of the domain, and installing the key and cert in the server.
Who will handle abuse complaints and revocations of known bad actors? I'd be curious to see whose abuse department will be handling those issues.
What kind of abuse were you thinking about? If the domain is hijacked, you simply repossess the domain and request a new certificate, and the old one is revoked.
As in, revoking a cert for a known C&C box, or a confirmed spammer, confirmed box serving an exploitkit, confirmed phishing domain (such as my-apple-ikloud-verify.foo)
Basically, my assumption is they won't want to be providing certs to known bad actors. So I'm curious who is going to own the abuse handling for the CA.
This seems like a really great step toward an HTTPS web. It will be an immediately deployable solution that can hopefully make TLS encryption normal and expected.
However, it doesn't do anything about the very serious problems with the CA system, which is fundamentally unsound because it requires trust and end users do not meaningfully have the authority to revoke that trust. And there's a bigger problem: if EFF's CA becomes the standard CA, there is now another single point of failure for a huge portion of the web. While I personally have a strong faith in the EFF, in the long term I shouldn't have to.
Agreed. For all the hoopla, this is basically just like any other CA (but free). Until we have a truly distributed (namecoin-esque) and accepted CA structure, signed certificates may as well be pipes directly to the NSA.
That said, not having to pay some jerk for sending me an email and having me enter a code is really nice. The current CA system is a pitiful excuse for identity verification, and not having to pay for it will be nice.
Here's my current issue with moving to TLS: library support.
I do a lot of custom stuff and want to run my own server. I can set up and run the server in maybe 50-100 lines of code, and it works great.
I know, I should conform and use Apache/nginx/OpenSSL like everyone else. Because they're so much more secure, right? By using professional code like the aforementioned, you won't get exposed to exploits like Heartbleed, Shellshock, etc.
But me, being the stubborn one I am, I want to just code up a site. I can open up a socket, parse a few text lines, and voila. Web server. Now I want to add TLS and what are my options?
OpenSSL, crazy API, issues like Heartbleed.
libtls from LibreSSL, amazing API, not packaged for anything but OpenBSD yet. Little to no real world testing.
Mozilla NSS or GnuTLS, awful APIs, everyone seems to recommend against them.
Obscure software I've never heard of: PolarSSL, MatrixSSL. May be good, but I'm uneasy with it since I don't know anything about them. And I have to hope they play nicely with all my environments (Clang on OS X, Visual C++ on Windows, GCC on Linux and BSD) and package managers.
Write my own. Hahah. Hahahahahahahahah. Yeah. All I have to do is implement AES, Camellia, DES, RC4, RC5, Triple DES, XTEA, Blowfish, MD5, MD2, MD4, SHA-1, SHA-2, RSA, Diffie-Hellman key exchange, Elliptic curve cryptography (ECC), Elliptic curve Diffie–Hellman (ECDH), Elliptic Curve DSA (ECDSA); and all with absolutely no errors (and this is critical!), and I'm good to go!
I'm not saying encryption should be a breeze, but come on. I want this in <socket.h> and available anywhere. I want to be able to ask for socket(AF_INET, SOCK_STREAMTLS, 0), call setsockcert(certdata, certsize) and be ready to go.
Everything we do in computer science is always about raising the bar in terms of complexity. Writing software requires larger and larger teams, and increasingly there's the attitude that "you can't possibly do that yourself, so don't even try." It's in writing operating systems, writing device drivers, writing web browsers, writing crypto software, etc.
I didn't get into programming to glue other people's code together. I want to learn how things work and write them myself. For once in this world, I'd love it if we could work on reducing complexity instead of adding to it.
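For what it's worth, Python's standard library already comes close to the one-call ideal being asked for; it's the C-level <socket.h> equivalent that doesn't exist. A sketch of both sides (hostnames and cert paths are placeholders):

```python
import socket
import ssl

def open_tls_client(host, port=443):
    """Roughly the wished-for socket(AF_INET, SOCK_STREAMTLS, 0):
    one context, one wrap call, certificate verification on by default."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

def tls_server_context(certfile, keyfile):
    """Server side: the setsockcert(certdata, certsize) analogue is
    load_cert_chain(); wrap the listening socket with the result."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    return ctx
```

The resulting object behaves like a socket (send/recv), which is roughly the "it's just SOCK_STREAM with a flag" experience described — but only because OpenSSL is doing the crypto underneath.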
Wow, of all the arguments I could think of against the current CA/TLS/HTTPS situation, a hobbyist deciding to write their own web server would not be one of them... Yes, you should just conform and stop doing this. Or at the very least you could let another process do TLS termination and just handle HTTP, if you really want to create your own off-by-one remote code execution errors instead of using the ones supplied by Apache et al.
> a hobbyist deciding to write their own web server would not be one of them
nginx started out as a hobby project by Igor Sysoev. Maybe he should have just used Apache too?
> Or at the very least you could let another process do TLS termination and just handle HTTP
A well-designed HTTPS->HTTP proxy package could work. Install proxy, and requests to it on 443 fetch localhost:80 (which you could firewall off externally if you wanted) and feed it back as HTTPS. Definitely not optimal, especially if it ends up eating a lot of RAM or limiting active connections, but it would be a quick-and-dirty method that would work for smaller sites.
But it won't handle other uses of TLS, such as if you wanted to use smtp.gmail.com, which requires STARTTLS. Or maybe you want to write an application that uses a new custom protocol, and want to encrypt that.
If you put this stuff into libc, and get it ISO standardized and simplified, and have it present out of the box with your compilers on each OS, then you'll open the door for developers to more easily take advantage of TLS encryption everywhere.
Look at the core API for GnuTLS: http://www.gnutls.org/manual/html_node/Core-TLS-API.html
This is just insane. It would take an average developer months to fully understand that API.
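Going back to the quick-and-dirty HTTPS->HTTP terminating proxy idea upthread, a minimal sketch — cert paths and ports are placeholders, and there's deliberately no error handling or connection limiting:

```python
import socket
import ssl
import threading

def pump(src, dst):
    """Copy bytes one direction until EOF, then close the far side."""
    while (chunk := src.recv(4096)):
        dst.sendall(chunk)
    dst.close()

def serve(certfile, keyfile, backend=("127.0.0.1", 80), listen=("", 443)):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    with socket.create_server(listen) as raw:
        with ctx.wrap_socket(raw, server_side=True) as srv:
            while True:
                client, _ = srv.accept()                      # TLS handshake here
                upstream = socket.create_connection(backend)  # plain-HTTP hop
                # Two threads per connection: fine for a small site,
                # exactly the RAM/connection ceiling mentioned above.
                threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
                threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```

As noted, this only covers HTTPS; STARTTLS-style protocols or custom protocols would each need their own termination logic (which is what stunnel-type tools generalize).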
I'm in a similar position to you. LibTLS looks promising, but as you said, it's not tested (and not portable yet?)
> Let's Encrypt will be overseen by the Internet Security Research Group (ISRG), a California public benefit corporation. ISRG will work with Mozilla, Cisco Systems Inc., Akamai, EFF, and others to build the much-needed infrastructure for the project and the 2015 launch
What's Cisco's role in this? I'm quite worried about that. It has been reported multiple times that Cisco's routers have NSA backdoors in them, from multiple angles (from TAO intercepting the routers to law enforcement having access to "legal intercept" in them).
So I hope they are not securing their certificates with Cisco's routers...
Lawful Intercept isn't a blanket government back door per se. It's a featureset that allows the operator to configure what is effectively a remote packet capture endpoint. That endpoint is disabled by default and requires operator configuration to be enabled.
It just happens that every ISP/telco in the US needs this capability to comply with CALEA so it's manufacturers responding to market forces. Juniper supports it, A Latvian router manufacturer supports it (http://wiki.mikrotik.com/wiki/CALEA), there's even open source code to do it (https://code.google.com/p/opencalea/) if you're building your own routers.
There's a place to focus your ire over wiretapping. The manufacturers aren't it.
Yep. Read the product docs on some of their big network routers. You'd be surprised what's in there in terms of port mirroring and what they suggest carriers should use it for.
We have a DNS system in place, which should be enough to establish trust between a browser and an SSL public key. E.g. a site could store a self-signed certificate fingerprint in a DNS record, and the browser should be fine with that. If the DNS system is spoofed, the user will be in a bad place anyway, so DNS must be secured in any case.
No. A proper certificate protects against a malicious DNS resolver.
What you're talking about is being introduced alongside DNSSEC, and it's called DANE.
https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
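Concretely, the record DANE defines is TLSA, published at a name like _443._tcp.example.com. A sketch of generating and checking the DANE-EE / full-certificate / SHA-256 form (the three leading fields are the standard usage, selector, and matching-type codes):

```python
import hashlib

def tlsa_rrdata(cert_der):
    """TLSA rdata: usage 3 (DANE-EE: pin the end-entity cert itself),
    selector 0 (hash the full certificate), matching type 1 (SHA-256)."""
    return f"3 0 1 {hashlib.sha256(cert_der).hexdigest()}"

def matches(cert_der, rrdata):
    """Client side: compare the certificate actually served over TLS
    against the digest pinned in DNS (fetched via a DNSSEC-validating
    resolver, or the record proves nothing)."""
    return rrdata.split()[3] == hashlib.sha256(cert_der).hexdigest()
```

With usage 3 the DNS record is the trust anchor, so no external CA is needed at all — which is exactly the "one root per TLD" model described above, and why DNSSEC validation is non-negotiable here.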
Two things:
1. I really hope this is hosted in a non-FVEY territory.
2. Why can't we set a date (say, 5 years?) when all browsers default to https, or some other encrypted protocol, and force you to type "http://" to access old, unencrypted servers?
Glad I don't work for a CA right now.
Will these certificates work with Internet Explorer and Chrome?
Yes. This CA will be cross-signed.
You can add your own CA to browsers.
You can and I can but 99.999% of normal users cannot and will not.
From the ACME spec, it looks like proof of ownership is provided via [0]:
>Put a CA-provided challenge at a specific place on the web server
or
> Put a CA-provided challenge at a DNS location corresponding to the target domain.
Since the server will presumably be plaintext at that point and DNS is UDP, couldn't an attacker like NSA just mitm the proof-of-site-ownership functionality of lets-encrypt to capture ownership at TOFU and then silently re-use it, e.g. via Akamai's infrastructure?
[0] https://github.com/letsencrypt/acme-spec/blob/master/draft-b...
Four things:
(1) You can do the attack you describe today with existing CAs that are issuing DV certs because posting a file on the web server is an existing DV validation method that's in routine use.
(2) There is another validation method we've developed called dvsni which is stronger in some respects (but yes, it still trusts DNS).
(3) We're expecting to do multipath testing of the proof of site ownership to make MITM attacks harder. (But as with much existing DV in general, someone who can completely compromise DNS can cause misissuance.)
(4) If the community finds solutions that make any step of this process stronger, Let's Encrypt will presumably adopt them.
Let's Encrypt can run a web spider - crawl the web to build a database of actively used domain names.
Periodically poll DNS for the list from that database to obtain the NS records for pretty much all of the web, and also A records for all the actively used hosts you find in the crawl. Keep this cache as a trace of how DNS records change.
Now, do the DNS polling from several different geographic locations. Now you've got a history of DNS from different viewpoints.
When you get a request for a certificate for, say, "microsoft.com", look up the domain name in the way described on the Lets Encrypt description. But also check that this IP address appears in the history, either from multiple locations for a few days, or from one location for a few months.
If this test fails, check if the historic IP addresses for this domain from the polled cache are already running TLS, signed by a regular CA. If so, reject the application.
Otherwise continue with validation in the way described on the Lets Encrypt web page.
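That vantage-point history check might look something like this — the thresholds (three locations, a few days, a few months) are invented for illustration:

```python
from datetime import datetime, timedelta

def ip_is_established(history, domain, ip, now,
                      min_locations=3, short_window=3, long_window=90):
    """history: iterable of (domain, ip, location, seen_at) DNS poll records,
    accumulated by the crawler/poller described above."""
    obs = [(loc, t) for (d, i, loc, t) in history if d == domain and i == ip]
    if not obs:
        return False
    locations = {loc for loc, _ in obs}
    oldest = min(t for _, t in obs)
    # Seen from several vantage points for at least a few days...
    if len(locations) >= min_locations and now - oldest >= timedelta(days=short_window):
        return True
    # ...or from a single vantage point for months.
    return now - oldest >= timedelta(days=long_window)
```

A freshly hijacked DNS record fails both branches, which is what would trigger the fallback check against the historic IPs' existing TLS setup.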
Agree completely, and it's worth noting that I don't have a solution to the issues I mentioned, either.
Leveraging other (potentially insecure) paths to establish trust might help further enhance confidence in authenticity; e.g. verification using something like the broad-based strategy of Moxie's Perspectives (except via plaintext), or maybe through additional verification of the plaintext on the site as fetched via Tor, or retrieving a cached copy of the site securely from the Internet Archive or search engines.
dvsni and multipath testing sound quite interesting, and I think defense in depth is the right approach.
Having been at Akamai's recent Edge conference, I didn't hear much from them on this. Does anyone have any additional details on their interest in the project?
This is great news! I'd also like to see a push for technologies like DANE (and necessarily DNSSEC) which address the flawed CA trust model.
While we're at it, let's get a non-profit domain registrar going.
> non-profit domain registrar
Domain squatters are already an issue. Imagine if you could register domains for free. I think having to pay $10 for a year is pretty fair. That's one reason I don't mind paying ~$70 for a .io domain: it keeps most squatters away.
You misunderstand. Domains must not be free, and domain cost isn't the problem. Nonprofit registrar != free domains.
The problem is the horrible user experience of registrars like Godaddy. I'd rather give my money to a nonprofit that isn't confusing non-technical website owners into buying products they don't need.
The registrar landscape is better now with Gandi, but still I'd rather pay a fully transparent nonprofit registrar if one existed.
Won't people need to have LetsEncrypt CA certificate installed on their computers to not get that red SSL incorrect certificate thing? Other than that, this is awesome.
IdenTrust will be cross-signing our roots while we apply to root programs.
Thanks for the clarification! You might want to add that point to your technical how-it-works section[1]. I was wondering how older browsers would accept a new CA's signature.
Also, I really wish AOL would have donated their root certs to y'all[2] so you didn't have to set up a whole new CA.
[1]: https://letsencrypt.org/howitworks/technology/
[2]: https://moderncrypto.org/mail-archive/messaging/2014/000618....
The "How It Works" page (https://letsencrypt.org/howitworks/) says:
- Obtain a browser-trusted certificate and set it up on your web server
IdenTrust is listed as a sponsor and is the CA for the letsencrypt.org certificate so I'm guessing they're doing some sort of partnership.
I mean ordinary people who will visit the page.
1 reply →
I just installed it including all its Python dependencies, and tried it on my Apache server, but it throws me tons of Python errors.
It would be super-awesome of you if you could let us know about those errors at
https://github.com/letsencrypt/lets-encrypt-preview/issues
or e-mail me about them. So far this has only been tested on a handful of configurations and will clearly need to be tested on many more over the next few months.
Please be careful when running it on your live server: if it does manage to get a cert right now, that cert won't be accepted by clients and will produce cert warnings (and if you use the "Secure" option at the end, you'll also be generating redirects from the HTTP site to the cert-warning-generating HTTPS version).
How does a CA that's formed by a conglomerate of U.S. companies (under the jurisdiction of the NSA) make us any safer than we are currently? It doesn't. The chain of trust chains up all the way to a U.S. company, which can be coerced into giving up the certificate and compromising the security of the entire chain. I'm on the side of the EFF trying to encrypt the web, but this is not the solution.
truth be told, it doesn't make anyone safer. it's a big fat placebo, especially once the NSA realizes that this project is entirely under their jurisdiction.
Now, if there was a project in Iceland or Seychelles that was doing something similar, I would be much more apt to participate.
Security theatre for the win(?) Do these people [EFF] not realize that the people they're trying to win over are network nerds? These are people that actually understand this shit and the repercussions of it.
I can't profess to understanding all the details of encryption infrastructure, but I learned very quickly in kindergarten that you can't trust anyone you don't know. It doesn't matter who they are, who they know or what they know. Half the time, you can't even trust "cold hard facts", the facts are frequently misinterpreted, fabricated or eventually proven to be wrong - once it was a fact that the earth was flat, then we were the centre of the universe, now the universe as we know it is held together by a God particle. Science claims facts that invalidate there being a God... all facts are a matter of our fallible understanding of this scientific instrument we are building. Even people you do trust can be coerced into doing things that compromise your ability to trust them or their motives.
If you want to automate trust, then you're eventually going to have to realize that you can't. All you can do is mitigate the cost of being wrong.
Absolute power corrupts absolutely - the CA (or whoever controls that CA) has absolute power in this scenario. If you have the director's family hostage, everyone else's security just went down the pan.
Chain of trust is like putting all your eggs in one basket. You just don't do it. Web of trust is a marginal step up, but it's more of a pain in the ass and can also be overcome by a group with malicious intent.
Something like Certificate Transparency would counter that - where the browser can only accept certificates that have been made public record. So the owners will at least know when their domain has been attacked.
A site owner would normally know something was wrong when their logs stop accumulating traffic. If the site still appears to be up and functioning normally, yet the router logs show no traffic actually arriving, that's a huge red flag that something is very wrong. I would expect any operations team worth their salt to figure this out inside of 15 minutes anyway.
Certificate Transparency may help to alert people, it's certainly a step in the right direction, but it doesn't fix the problem in my mind. I honestly don't think the problem can be fixed. All we can do is try and mitigate the risk of our trust being broken.
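For readers unfamiliar with the mechanics, Certificate Transparency keeps every issued certificate in an append-only Merkle tree, so a browser or site owner can check a short inclusion proof against the published tree head. Here's a toy sketch of that proof, loosely following the RFC 6962 hashing scheme (simplified to power-of-two leaf counts; real logs handle arbitrary sizes and signed tree heads):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix domain-separates leaves from interior nodes (RFC 6962)
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves):
    level = [leaf_hash(d) for d in leaves]
    while len(level) > 1:  # assumes len(leaves) is a power of two
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def audit_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    level = [leaf_hash(d) for d in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def included(leaf: bytes, index: int, path, root: bytes) -> bool:
    cur = leaf_hash(leaf)
    for sib in path:
        cur = node_hash(cur, sib) if index % 2 == 0 else node_hash(sib, cur)
        index //= 2
    return cur == root

certs = [b"cert-A", b"cert-B", b"cert-C", b"cert-D"]  # stand-ins for real DER certs
root = merkle_root(certs)
assert included(b"cert-C", 2, audit_path(certs, 2), root)      # in the log
assert not included(b"cert-X", 2, audit_path(certs, 2), root)  # never logged
```

The point is that the proof is only log2(N) hashes, so a client can cheaply demand evidence that a certificate is on the public record before trusting it.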
So this means that GoDaddy, Namecheap, Verisign and other sellers/resellers of SSL certificates will need to lower their prices soon, right? Because in a short time many websites won't need to purchase one since they can get it free.
Also, have they built this system with a completely scalable distributed architecture? For it to be practical it needs to be performant.
Also, does the NSA have access to the core of this system?
You can already get free certificates from startssl today: https://www.startssl.com/?app=1
Aren't those only for personal (i.e. non-commercial) websites?
My website only contains publicly available stuff for people to read.
Is there any reason why I would want to use https for this use case?
Or what does "entire web" mean?
If you're not using HTTPS it is trivial for anyone in the middle of the "client to server and back" connection to change any of the content.
If you use HTTPS you prevent alterations to that traffic and people receive exactly what you expect they should receive.
Examples of recent ISP misbehaving on non-https websites just 25 days ago on HN: https://news.ycombinator.com/item?id=8500131
Note that the Verizon issue didn't alter content outright, but someone who lives in a country with strict monitoring of traffic could easily have the wording of your website changed to match their propaganda if you aren't using HTTPS.
So yes, your content is publicly available free stuff and probably no one is sending you user login credentials or credit cards, but it still matters.
Only with https could you be sure that your visitors are viewing the exact information you published, and that the content has not been hijacked.
Over http it could conceivably have malicious or tracking content introduced without your knowledge.
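To make the threat concrete, here's a toy sketch of how trivially an on-path intermediary can rewrite a plaintext HTTP response; the ads.example script URL is invented for illustration:

```python
def inject_ad(response_body: bytes) -> bytes:
    # Anyone between client and server (ISP, WiFi AP, national firewall)
    # can rewrite plaintext HTTP like this; TLS is what prevents it.
    snippet = b'<script src="http://ads.example/track.js"></script>'
    return response_body.replace(b"</body>", snippet + b"</body>")

page = b"<html><body><p>My harmless article</p></body></html>"
tampered = inject_ad(page)
assert b"track.js" in tampered  # readers get content the author never published
```

Neither the author nor the reader has any way to detect this from the content alone, which is why integrity (not just secrecy) is the argument for HTTPS on public sites.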
Is there any reason why I would want to use https for this use case?
Yes it can help you stop:
ISPs inserting adverts into your content (this has happened)
Governments censoring your content or rewriting it
Governments putting people in jail for reading your publicly available (in your country) content, which is illegal in theirs
People impersonating your website
But if you don't want to use it, that's cool too. I suspect all websites will be encrypted at some point soon though, the disadvantages are getting less and less important.
>Governments putting people in jail for reading your publicly available (in your country) content, which is illegal in theirs //
How does that work? Surely the gov can still see people accessing the information by monitoring network traffic, and the info itself is still public. HTTPS doesn't encrypt the actual request traffic, does it? And in any case the gov would still see which server the traffic is going to unless you're using something like Tor (and possibly even then).
6 replies →
Sure! If I trust your site but not my ISP, then https allows me to trust the connection between us. That means that nobody can tamper with the content and inject some malicious JS. Also, the ISP could only tell that I am talking to your server, and not anything beyond that.
Yes, because it is no one's business what people are looking at anyways. If you have more than one URL, HTTPS will hide that.
HTTPS will also make an attacker unable to change your content.
Anyone along the way (like an ISP) could inject things into your webpage. Like ads https://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-...
People could think they are reading an article from your site but actually they're not or the text was tampered with. With https you ensure people are actually reading what you published.
Honestly, you're probably not going to get a large personal benefit from this. The larger good is that you'll be helping move the Internet toward encrypted-by-default, which is an enormous societal benefit.
Like you, I don't host any private or remotely sensitive information. I'm encrypting my site because I think it's the right thing to do, even though there's little personal return on investment.
While this is nice and I'm happy to see such a product coming, I still don't see a free TLS solution for my smaller projects. Heroku will still charge me $20/mo for TLS even if I have my certificate. Cloudflare will also want to charge me to inspect TLS. I could drop both and get a Linode but then that costs too and is a pain to setup a server myself.
"With a launch scheduled for summer 2015, the Let’s Encrypt CA will automatically issue and manage free certificates for any website that needs them."
'Automatically?'
So we're replacing owning people by snooping on their HTTP traffic with owning people by directing them to fake websites digitally signed by "m1crosoft.com"?
... actually, yes, that is kind of an improvement.
A little vague on details.
Apache only or also Nginx?
Who is the CA?
No way I am running something like this on a production machine.
I like the idea but I would rather have the client just output the certificate and key in a dir so I can put the files where I need them and I can configure the changes to my webserver.
Also this does not solve the issue of a CA issuing certificates for your domain and doing MITM.
This is just a pre-announcement to let folks (OSes, hosting providers, other platforms) plan and do integration work. Per our own warnings, we definitely don't want this running on production machines until it launches in 2015.
Our Apache code is a developer preview, we'll be working on Nginx next.
ISRG will be operating a new root CA for this project. Although if you think that your choice of CA makes you more or less secure, you may not have understood how PKIX works -- you can buy a cert from whichever CA you like, but your adversary can always pick the weakest one to try to impersonate you.
> ISRG will be operating a new root CA for this project.
Are you going to be cross-signed by IdenTrust or something? If you're really going to try and create a new root CA from scratch, surely you will be impaled on the spike of low coverage for many years?
1 reply →
"ISRG will be operating a new root CA for this project."
Does that mean every client/browser will need to be updated to include the new CA? Or will it somehow be signed by other (competing) CAs?
I like the idea of this project, and I think it's a great thing for the Internet - I just worry that it will take a long time for it to be usable in practice.
2 replies →
We're putting out a protocol for requesting certs and validating domain control (that our new CA will support -- and we'll be cross-signed to work in all mainstream browsers) and we've already written a client for it that integrates with Apache and can edit your Apache config.
If you're comfortable editing your own Apache configs, then you'll only need to use the client to obtain the cert and not to manage its installation long-term. (The client does need to reconfigure the web server while obtaining the cert as part of the CA validation process.)
The protocol is openly published, so you can write your own client too, or follow the protocol steps manually -- or any web server developer or hosting platform can develop their own alternative client.
There does need to be some client to speak the protocol, but there's no attempt to force you to use it to manage your certs and configs if that's not what you want. The convenience is aimed at people who don't understand how to install a cert, or who think that process is too time-consuming.
ACME is a protocol for securely and automatically issuing certificates. Presumably Apache, Nginx, IIS and any other web server could take advantage of it.
They are creating a new CA for this purpose.
I can't imagine this will replace the manual learn-pay-experiment-error-success process we have at the moment, so experts will still be able to control the process manually.
It doesn't address CA MITM attacks, but it does significantly reduce non-CA MITM attacks by removing the two primary objections to deploying SSL: cost and complexity. Bravo EFF - this could be the most significant step in SSL adoption ever.
For those who are wondering why sschueller is saying such things (when I first read his comment my reaction was "how the f* could this be limited to Apache?", which worried me since I mostly use lighttpd), see the How It Works page [1] of Let's Encrypt.
> I would rather have the client just output the certificate and key in a dir
Could not agree more.
[1] https://letsencrypt.org/howitworks/
"This code intended for testing, demonstration, and integration engineering with OSes and hosting platforms. Currently the code works with Linux and Apache, though we will be expanding it to other platforms."
https://github.com/letsencrypt/lets-encrypt-preview
I'm sure they'd support "manual setup". Not many sites would opt into running their software agent (yet). I'm expecting the client to come with a lot more benefits than "simple setup", though.
I think that plenty of site admins will be happy to run this software agent. Remember, there are many site admins right now who can't even be bothered to set up TLS at all.
I run plenty of tools right now on production boxes that I personally haven't fully audited—we all do. This tool should be simple and widely used enough that it will be trustworthy.
For the cautious, it would be nice if the tool offered a mode that could be run as a normal user, even on a different machine. It'd have to be an interactive process:
1. "Enter domain to be signed."
2. "To validate ownership of the domain, create a TXT record on xxx.example.domain with 'na8sdnajsdnfkasdkey' as the value."
3. "Domain ownership has been validated. Please paste the CSR."
4. "The CSR has been signed. Here is your certificate:"
Much less convenient (basically the same as the process with current CAs), but it would allow security-conscious admins to use the CA in a way that is comfortable for them. Since the tool is open source, it should be fairly easy for someone to write their own tool that speaks to the CA while providing this interactive process.
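The interactive flow above is simple enough to sketch. Here's a toy simulation of the token-in-TXT-record step, where a dict stands in for public DNS; the record name and token format here are invented, and the real protocol (ACME) binds the challenge to an account key rather than using a raw token:

```python
import secrets

dns_zone = {}  # stand-in for the world's public DNS

def ca_issue_challenge() -> str:
    # Step 2: the CA hands out an unguessable token
    return secrets.token_urlsafe(32)

def admin_publish_txt(domain: str, token: str) -> None:
    # The admin creates the TXT record out-of-band (registrar panel, etc.)
    dns_zone[f"_validation.{domain}"] = token

def ca_validate(domain: str, token: str) -> bool:
    # Step 3: the CA queries DNS; a match proves control of the zone
    return dns_zone.get(f"_validation.{domain}") == token

token = ca_issue_challenge()
admin_publish_txt("example.com", token)
assert ca_validate("example.com", token)
assert not ca_validate("example.com", "guessed-token")
```

Because only someone who can edit the zone can plant the token, a successful lookup is evidence of domain control, which is exactly the claim a DV cert makes and nothing more.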
The tool is interesting and to be honest, I'll be comfortable using it (I'm not running anything high-profile or sensitive). However, the real news here is the new CA—the tool is merely a convenience.
I don't want to be a full-fledged sponsor but I'd love to see a donate function to their site. Once this is released if the CA is trusted by all the major browsers I am more than willing to shift all the money we spend in certs from all these other "authorities" to something constructive like this.
The real problem here is that HTTP is unencrypted by default. It really should be encrypted so that passive listeners can't see the traffic. I know this is no protection against man-in-the-middle attacks, but at least WiFi sniffers and similar would be stopped. State actors would have to actively do something, which might be detected. It would be a great improvement, because in the current system most websites are going to stay unencrypted, since it takes money and effort to set up a certificate. The millions of shared hosts won't do it by default.
What we can do:
- Change the HTTP protocol to be encrypted?
- Create an Apache module that automatically does this and needs no setup time (generate private keys automatically?)
Of course there shouldn't be any indicator of this encryption in the address bar of the browser.
Maybe it's too late.
This is an awesome idea. But I thought the whole idea of a certificate authority is so that we can trust that the CA has vetted the person/site that they have given the certificate to. If all they do is issue certs for free, all we get is encryption, but no identity verification.
With basic certs, the CA just verifies that the entity controls the website the cert is being issued for. The OP explains how Let's Encrypt will do that. (And if they appeared not to be doing that, no software vendors would include the CA in the trust list).
With an "Extended Validation" cert, the CA additionally verifies that they are who they say they are on the cert (not just that they control the (web)sites the cert was issued for). I'm not sure if Let's Encrypt plans on issuing EV certs, but if they are, they will have to comply with whatever verification standards are standard, in order for vendors not to revoke them from trusted stores. Same as anyone else.
Further upthread, Josh (one of the other people working on the project) explained that Let's Encrypt currently only has plans to issue DV certs, not EV certs.
https://news.ycombinator.com/item?id=8624634
This is because of the automation aspect. EV cert issuance involves a human being looking at offline identity; DV issuance involves proofs of control that can be checked online by a computer, just as existing DV issuance by existing CAs is based on such checks.
Wouldn't this result in putting all the eggs in a single basket?
Besides, as a European, I'm not so excited that such an initiative is under the control of American law. I suspect that American interests will prevail.
Would you like to spell out more explicitly which effects of U.S. jurisdiction you're most concerned with?
I agree that there are several possible effects of jurisdiction on CAs that people could reasonably be concerned with (whether as would-be certificate requestors or would-be relying parties), but I'm wondering which ones are concerning you most.
The effect is that the NSA, the FBI or others could obtain the private key of the EFF root CA through legal arm twisting and gagging.
Certificates are public, so there is no problem with certificate requests.
If the project is US-only, then it won't make much difference from the current situation. That wasn't explicit in the announcement.
NSLs? US agencies are legally able to perform MITM attacks under US jurisdiction.
Even today you can have all your HTTP traffic encrypted and compressed, using Mozilla Janus[1] or Data Compression Proxy[2].
[1] https://addons.mozilla.org/en-US/firefox/addon/janus-proxy-c...
[2] https://chrome.google.com/webstore/detail/data-compression-p...
It would be nice to have support for ECDSA certificates. I've not found a CA yet who'll provide one of these, despite the fact that many clients do already support them. Unfortunately, after a brief look through client.py I can't see any support for this. Is there any good way of filing an RFE or contributing a patch?
ECDSA certs are much cheaper to decrypt, and there are still some places (especially mobile) where TLS is a noticeable overhead - it'd be great to have a CA that provides them.
Indeed, I think that might be viable: it certainly was for CloudFlare! And good ECC certainly is "modern security techniques and best practices". I would however be OK with RSA-2048 using SHA-256, I guess; it's what many other CAs currently use (and this is partially a lowest-denominator problem).
Comodo definitely has an ECC root available now, a cross-signed ECDSA root with secp384r1, signing a secp256r1 intermediate. (I had heard there were 3 others deployed out there, although off the top of my head I'm not clear about which they are, perhaps they're also cross-signed?)
Why is ECC so poorly deployed in TLS? I've heard indications Certicom had formerly aggressively asserted patents, hence the lack of ECC-supporting CAs; but I don't know which ones. I highly doubt they're still extant, however: many have since expired.
Do be aware however that ECDSA can present a huge hazard if the k value needed for every signature is even partially predictable and varies. Officially, it should be random, and very strongly random (even the first two bits being consistently predictable is cumulatively disastrous; using the exact same k to sign two different things is absolutely catastrophic and is how the Sony PS3 root keys were calculated!). If you have a strong PRF for your RNG, you should be fine (e.g. LibreSSL uses ChaCha20); if you want some insurance just in case, you can use a more-auditable and less fragile deterministic process with a strong PRF so the same signature always gets the same, unpredictable k (see RFC 6979), or a combination of the two approaches (e.g. BoringSSL). DSA also had this issue. If you haven't audited your RNG and know it's strong, maybe you should check it before you deploy ECDSA: if your system's headless, its entropy is running on empty and your idea of a mixing function is RC4, it might not be such a hot idea.
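To show just how catastrophic a repeated k is, here's the recovery algebra worked through on a deliberately tiny, insecure DSA-style group (the same two equations break ECDSA identically; all numbers are toy values for illustration):

```python
# Toy group: q is the prime subgroup order, q divides p - 1
p, q = 607, 101
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

d = 57   # victim's private key
k = 23   # the nonce, fatally reused below

def sign(h):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + r * d) % q
    return r, s

h1, h2 = 11, 47            # two different message hashes
r1, s1 = sign(h1)
r2, s2 = sign(h2)
assert r1 == r2            # identical r values betray the reused nonce

# Attacker's side: only public values (hashes and signatures) are used.
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
d_rec = (s1 * k_rec - h1) * pow(r1, -1, q) % q
assert (k_rec, d_rec) == (k, d)  # full private-key recovery
```

This is exactly the PS3 failure mentioned above; the deterministic k of RFC 6979 sidesteps it by deriving k from the private key and the message, so the same input never yields two signatures with different nonces.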
It'd be fantastic to have the option available to have an ECC root around, and let us have RSA or ECC certs. (Yes, you can cross-sign across algorithms.) Perhaps have ECDSA off by default for a while in light of the above, but it can provide very good performance for people who use modern software and turn it on!
I'd suggest using secp256r1 (aka NIST P-256). It's already deployed and 256-bit curves lie at about RSA-3072 strength (stronger than most deployed CAs now, which typically use RSA-2048). A few others have deployed secp384r1, but that was following the NSA's Suite B lead; I'm not sold on that being relevant at all. secp256r1 is also fairly fast, with some very well-optimised constant-time routines available in OpenSSL if you enable a flag or even faster ones if you're using 1.2 beta (expect something like a 200%-250% performance boost over the generic elliptic curve routines); it's not quite Curve25519 speed, but it isn't bad.
I do however acknowledge the extreme murkiness surrounding the generation of the NIST/SECG/X9.62 curves. That does present some concern to me. I tried to get to the bottom of that (see my posts on CFRG) and I can summarise what I found out as basically (now-expired/irrelevant) patent-related shenanigans. I'm not super-comfortable with that degree of opacity in my curves - however, I also don't know of any actual security problems with secp256r1 (or secp384r1) as they stand, providing they are properly implemented (very big proviso!). I don't think they're backdoored, but make sure you check the hinges on the front door, and I'd prefer a house with better foundations!
More transparently-produced curves (such as the Brainpool curves) do exist, but Brainpool is sadly very slow in software (less than half the speed than a good P256 routine, and no scope for optimisations).
So looking forward, CFRG (at IRTF) were asked by the TLS Working Group to recommend even better new curves: most probably Curve25519 in my opinion, as that seems to admirably satisfy all the criteria for a fast strong mainstream curve, and probably one larger "extra paranoid" curve will be recommended, though I really don't know what it'll be at this stage. These hopefully will be/are even faster, strong, and rigidly-explained, without murky origins. And hopefully there will be better algorithms than ECDSA (perhaps a Schnorr-based algorithm such as Ed25519, now the patent has expired?). I very much doubt all the supporting infrastructure for that, like HSMs and widely-deployed software support, will be "ready" for this project in the timeframe we'd like, however, so in the meantime, P256 or RSA-2048 I guess is OK.
In general: This is absolutely wonderful news. It, and the efficiency of TLS 1.2 (and later, TLS 1.3) will enable people to run TLS everywhere. I am very probably going to use it myself.
While the Israel-based CA StartCom do already offer free TLS certificates and I have previously lauded them for that, they pulled an absolutely unforgivable move detrimental to internet security as a whole in refusing to revoke and rekey certificates for free even exceptionally in the immediate wake of Heartbleed (and I do think they should have their CA status reviewed very harshly as a result or revoked, because I do not think that is compliant with CA/B guidelines: they definitely still have live signatures on compromised keys that have not been revoked, which is totally unacceptable). If this initiative means we can replace and dump bad CAs, it's even better news.
Hm? The overhead reduction is on the server. Verification IIRC is somewhat slower.
There are CAs that issue them, uh... someone managed to get one issued during the TLS WG meeting at IETF last week. I'd have to listen to the audio to find out what CA they used.
I can help get a ECDSA cert for you - my (personal) email is on my profile.
I'm hoping that one day soon, I'll be able to remove this line from my nginx config:
My web server will notice that I want SSL, but haven't specified a path to a cert. It will then go off and generate one and get it signed automatically using an API like the one being discussed. It will also handle renewing automatically when the time comes.
Automatic unconfigured behavior is bad, but something like a ssl_certificate_auto directive that's in the default config would make a lot of sense.
This is a great initiative. On the other hand, I'm beginning to think that security models based on any central authority will always be at risk of getting compromised from within. Techniques that allow trusted security to be established between two parties without the need for a third-party authority to validate them would be nice to see.
That's a great idea and I'm a big fan of the EFF. But what browser support will this have? Even if all browsers on all platforms add this to their root certificates, how many years will it take before even half of the devices in use support it (remember the number of people still using Windows XP)?
It's initially cross-signed by IdenTrust, which has wide browser support today.
Whatever happened to http://www.cacert.org/?
Browser adoption never happened – it's unfortunate but that's a critical barrier to entry for almost anything
Is there a reason they don't use it on their own site (https://cacert.org/)?
They do. You just don't have their root installed so it gives an error. You can install their root here http://www.cacert.org/index.php?id=3
ACME sounds great. Copying codes from emails is suboptimal at best. Free certificate from command line and free revocation from the same client sound even better.
I just don't know about the automatic configuration tool. Like webpanels for managing a server, it has never worked for me.
How does this compare to StartSSL?
Maybe they won't try to extort you if you need a revocation?
(StartSSL do even for paying customers... stay away)
Very interesting, it looks like they're working with IdenTrust on this. I wonder if it supports wildcard certs.
Running a CA is very expensive (that's why StartCom charges for Class 2/3), and I wonder how they plan on recouping the costs of this.
> running a CA is very expensive and I wonder how they plan on recouping the fees for this
Is it? Seems like it should be dirt cheap to me. It's basically just an API for generating and revoking certs.
WebTrust audits are the bulk of the cost. We got quoted $150k for our first audit. This is a yearly thing too.
You also have to pay for your own cage in a datacenter, the HSM, validation staff, etc...
I wonder how many of the cheap web hosts will implement this. I think the increased hosting cost on top of the certificate itself also discourages people from using TLS. Wishful thinking, perhaps...
It's an interesting idea, I'm just not clear on how it works (even when looking at the "How it works" section) - e.g., how do I integrate this with... say, nginx?
TLS/SSL certificate setup is a pretty mechanical task. I would imagine their program detects common web servers (nginx, apache, etc), puts the private key somewhere, and points the configuration files at it.
afaik there will be an api to request just the certificate and you'll have to integrate it manually, or a special program which will automatically add it to nginx for you (presumably only for simple setups)
I just set up an SSL certificate on my website for the first time and it only took like 10 minutes altogether. I don't get any warnings from any browsers and it was only $10.
And all because people like YOU donated :)
Thanks :)
btw if you want to donate too, here is the link: https://supporters.eff.org/donate
I'm so excited for this. I know both the people working on the team from the University of Michigan, and both are extremely smart people passionate about web encryption.
Wow, I had a goofy idea a few months ago that one day we could have some sort of non-profit/charity that just runs a free, as in beer and freedom, "common good" CA.
Looks neat.
This is great news, but I am wondering how they will handle revoking certificates. For example: do we really want malware sites popping up with valid SSL certificates?
Why not? If you own the domain you can get a DV cert, whether you use the domain for malicious purposes or not.
The certificate isn't saying "this website won't infect your computer", it's saying "you're talking to the real owner of this domain".
You can revoke the certificates from the command line. It's shown at the end of the video.
Ah, so there aren't plans to add this CA into web browsers. That makes more sense.
Finally! Man, is getting and managing certificates a pain in the *ss for our small shop that does a great number of small websites.
This news put a huge grin on my face. Let's hope Heroku drops that ridiculous $20 charge for SSL endpoint as well.
I hope they do wildcard certs as well.
Hi. This is such an amazing project to work on. Who started this? Who came up with the idea?
Wondering how they're going to cover the costs of being a CA.
Who's auditing the CA?
Who's auditing the CA's currently trusted by your browser?
Various third parties. This is required by the CA/Browser Forum, whose rules my browser requires as well.
Inform yourself if you want to write stuff like that. Even more, it's sad that people think CAs have zero checking and just give, what, money to browsers to be included? Thankfully it's not like that yet.
2 replies →
We intend to have a WebTrust audit, just like other CAs do.
thanks!
Who's auditing the auditors? Remember Moody's? It's not entirely analogous, but it's not far from it.
At some point down the chain, you have to rely on trust to some degree. Either disappear in to the wilderness and completely disconnect from the grid or - at some point - you have to trust someone.
All CAs trusted by major browsers are audited by a third party. That's what provides the trust.
https://cabforum.org/baseline-requirements/ https://www.mozilla.org/en-US/about/governance/policies/secu... https://www.mozilla.org/en-US/about/governance/policies/secu...
Geez the people without a clue wanting to patronize on HN - I'm telling you - nothing like that to lose faith in humanity.
Kudos to the EFF for making an easy-to-use tool to generate TLS certs!
Kudos also for creating the second CA to issue free certificates (the first being StartSSL).
The next step needs to be making these certs man-in-the-middle (MITM) proof; we still have to address that problem. We'll be talking about how the blockchain can be used to solve this tonight at the SF Bitcoin Meetup. If that interests you, you're welcome to come:
http://www.meetup.com/San-Francisco-Bitcoin-Social/events/18...
A primer can be found here: https://vimeo.com/100433057
The blockchain can't fix this problem - it's too large for most embedded devices, which do matter. It isn't a solution to every problem on the planet just because it's a 'cool new crypto idea'. Just because something uses crypto doesn't mean adding the blockchain to it makes it any better.
Hmm, I'm not sure exactly how the parent intends using it, but isn't something like the blockchain (i.e. a publicly auditable log of all changes to certificates) already being proposed for improving the current PKI infrastructure? Also, is the problem you have for embedded devices that they can't afford to download, store, and verify the full bitcoin block-chain? Surely there are compromise solutions that could be made? Not denigrating your comment btw, it's an interesting observation!
> The blockchain can't fix this problem - it's too large for most embedded devices
That's a wise observation, which is why we are making it accessible to such devices over a MITM-proof connection via DNSChain:
https://github.com/okTurtles/dnschain
NSA shilling in full force I see. You guys sure can trust this CA, I guarantee it :^)
wohoo. i hope this will be good.
Uh oh, this looks like it kills sslmate.com
Sorry agwa.
One possible solution is a BitCoin-like block chain of certificate proof, so that a website's certificate can be verified against the domain without a central authority.
That doesn't even remotely work, who has the private keys to authorize the certificates?
What authorization is required in this scenario? I'm talking about a novel idea here, one that doesn't fit into the existing CA model. There would be no CA in this scenario; verification would be decentralized, based on shared information, not on knowledge of a secret.
2 replies →
So, blockchain solutions do work, and here is how:
https://github.com/okTurtles/dnschain
You can replace all CAs with a single blockchain.
And we should do this, because this Let's Encrypt CA, while a great step forward, is still vulnerable to man-in-the-middle attacks, explained in this video:
https://vimeo.com/100433057
Can't wait until summer 2015 for a free cert? CloudFlare offers Universal SSL: https://www.cloudflare.com/ssl
That allows CloudFlare to read any traffic, because your SSL connection terminates at CloudFlare and is authenticated with their private key...
That's not really a comparable solution, it requires giving up control of the domain.
For Cloudflare customers...
Those free certs will encrypt from the browser to CloudFlare's CDN. You still need to do something to encrypt from CloudFlare to the publishing webserver. Self-signing can work for that hop, though Let's Encrypt may wind up being smoother for sys admins.
We've been working with CloudFlare to drive HTTPS adoption, and plan to work with them further on integration.