Comment by digitalsushi
11 years ago
This certificate industry has been such a racket. It's not even tacitly understood that there are two completely separate issues that certificates and encryption solve. They get conflated, and non-technical users rightly get confused about which thing is trying to solve a problem they aren't sure why they have.
The certificate authorities are quite in love with the fact that the self-signed certificate errors are turning redder, bolder, and bigger. A self signed certificate warning means "Warning! The admin on the site you're connecting to wants this conversation to be private but it hasn't been proven that he has 200 bucks for us to say he's cool".
But so what if he's cool? Yeah I like my banking website to be "cool" but for 200 bucks I can be just as "cool". A few years back the browsers started putting extra bling on the URL bar if the coolness factor was high enough - if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge. And they should, that means someone had to fax over vials of blood with the governor's seal that it's a legitimate institution in that state or province. But my little 200 dollar, not pulsating but still green certificate means "yeah digitalsushi definitely had 200 bucks and a fax machine, or at least was hostmaster@digitalsushi.com for damned sure".
And that is good enough for users. No errors? It's legit.
What's the difference between me coughing up 200 bucks to make that URL bar green, and it turning bright red with klaxons because I didn't cough up the 200 bucks to prove I'm the owner of a personal domain? Like I said, a racket. The certificate authorities love causing a panic. But don't tell me users are any safer just 'cause I had 200 bucks. They're not.
The cert is just for warm and fuzzies. The encryption is to keep snoops out. If I made a browser, I would have 200 dollar "hostmaster" verification be some orange, cautious URL bar - "this person has a site that we have verified to the laziest extent possible without getting sued for not even doing anything at all". But then I probably wouldn't be getting any tips in my jar from the CAs at the end of the day.
> A self signed certificate warning means "Warning! The admin on the site you're connecting to wants this conversation to be private but it hasn't been proven that he has 200 bucks for us to say he's cool"
no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".
Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private.
Unfortunately, there is no way to guarantee that to be true if the identity of the server certificate can't somehow be tied to the identity of the server.
So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation being private", so no error message is shown.
But the moment you accept the connection over ssl, the user agent assumes the connection to be intended to be private and failure to assert identity becomes a terminal issue.
This doesn't mean that the CA way of doing things is the right way - far from it. It's just the best that we currently have.
The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.
> The solution is something nobody has quite come up with yet.
SSH has. It tells me:
WARNING, You are connecting to this site (fi:ng:er:pr:in:t) for the first time. Do your homework now. IF you deem it trustworthy right now then I will never bother you again UNLESS someone tries to impersonate it in the future.
That model isn't perfect either but it is much preferable over the model that we currently have, which is: Blindly trust everyone who manages to exert control over any one of the 200+ "Certificate Authorities" that someone chose to bake into my browser.
...and then if the fingerprint changes, you get something like this:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@ WARNING! THIS ADDRESS MAY BE DOING SOMETHING NASTY!! @@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
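(For what it's worth, that trust-on-first-use behaviour is easy to sketch for HTTPS too. This is only an illustration using Python's standard library; the known-hosts-style store path is made up, and a real implementation would obviously live inside the browser.)

    import hashlib, json, os, ssl

    STORE = os.path.expanduser("~/.https_known_hosts.json")  # made-up location

    def cert_fingerprint(host, port=443):
        # Fetch the server certificate without any CA validation, the way a
        # first SSH connection would, and hash it.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def check(host):
        known = json.load(open(STORE)) if os.path.exists(STORE) else {}
        seen = cert_fingerprint(host)
        if host not in known:
            print("First contact with", host, "- fingerprint:", seen)
            print("Do your homework now; it will be trusted from here on.")
            known[host] = seen
            json.dump(known, open(STORE, "w"))
        elif known[host] != seen:
            print("@@@ WARNING:", host, "presented a DIFFERENT key - possible impersonation @@@")
        else:
            print(host, "matches the fingerprint recorded on first use.")

    check("example.com")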
> SSH has.
IMHO no. We don't SSH to the same 46 servers every day. But we do log into that many (or more) websites. Can you imagine the amount of homework users would need to do for this to work?
Not to mention the amount of non-tech savvy users who just won't put up with it.
My browser also offers to let me accept any self-signed certificate: I can investigate it, accept it, and it won't ever bother me again until the certificate changes.
The problem is that this is a huge hassle for incidental visitors. Whereas SSH does not have incidental visitors. Same goes for email, if it's your own server, you know the cert to be the real one, and you can accept it, you're not bothered again.
Certificate Patrol can give you something like this for Firefox.
So this is where we stand:
I think there's a pretty blatant antipattern here, and I'm not talking about colourblind-proofing the browser chrome.
> Encrypted (Certified) COOL GREEN
I think we can agree that this case is correct. If you have a properly vetted cert, more power to you. The browser should tell your users that you do own this domain.
> Encrypted (Self-Signed) EVIL RED
Not quite. Your user does have the ability to permanently trust this certificate. However, if I am trying to access gmail.com over HTTPS, I better not get this error. Otherwise, I know for a fact someone is messing with me.
> Unencrypted NOTHING / NEUTRAL CHROME
This case should be eliminated. We need to stop publishing stuff over HTTP. Period. The browsers should start fast tracking dropping support for HTTP altogether so we don't even have to think about this case.
Now the solution for case #2 is that every time you buy a domain, your registrar should issue you a wildcard cert for that domain. Moreover, you should be able to use that private key + cert to sign additional certs for individual subdomains. That way we can eliminate all the CAs. We would essentially use the same infrastructure that already supports domain name registration and DNS instead of funding a completely parallel, yet deeply flawed CA industry. As a bonus, this way only your registrar and you may issue certs for your domain.
This is all castles in the sky, but IMO that's the correct solution.
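For illustration only, here is roughly what that delegation could look like mechanically, sketched with the third-party Python cryptography package. The names are invented, and a real scheme would also need name constraints plus something DNSSEC-like so clients could verify the registrar's signature; this just shows the registrar signing a wildcard cert whose key then signs a per-subdomain cert.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def new_key():
        return rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def name(cn):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    def issue(subject_cn, issuer_cn, subject_pub, issuer_key, ca=False):
        now = datetime.datetime.utcnow()
        return (x509.CertificateBuilder()
                .subject_name(name(subject_cn))
                .issuer_name(name(issuer_cn))
                .public_key(subject_pub)
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(days=365))
                .add_extension(x509.BasicConstraints(ca=ca, path_length=None), critical=True)
                .sign(issuer_key, hashes.SHA256()))

    # The registrar signs a wildcard cert for the domain it just sold you...
    registrar_key = new_key()
    owner_key = new_key()
    wildcard = issue("*.example.com", "registrar.example", owner_key.public_key(),
                     registrar_key, ca=True)

    # ...and you use the matching private key to mint per-subdomain certs yourself.
    blog_key = new_key()
    blog = issue("blog.example.com", "*.example.com", blog_key.public_key(), owner_key)
    print(blog.subject.rfc4514_string(), "issued by", blog.issuer.rfc4514_string())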
To know something is insecure can be acceptable. To think something is secure when it isn't can be far more dangerous. I'm considering secure to mean encrypted and identity reasonably verified. Whatever your thoughts on the CA process, it serves a purpose.
There are plenty of other things to complain about. EV for one.
It's actually more like this:
Authentication and encryption are fundamentally separate ideas, and the problem here is that the CA system mixes them together, when an optimal solution (read: encryption everywhere) would be to tackle them separately.
Doing financial work or communicating with friends/coworkers? Make sure your connection is authenticated and encrypted.
Connecting to a blog? Encryption is a plus (and is the topic of this very HN post). But unencrypted is also okay.
The original CA system was not designed to defend against mass surveillance so it had little incentive to separate these concerns.
It's definitely an antipattern. It's hard to solve until we get HTTPS deployable everywhere, because the first browser to defect from this antipattern will lose all its users, so it's extremely important to push on HTTPS being deployable and deployed everywhere.
> It's just the best that we currently have.
No, I wouldn't say so. Having SSL is better than having nothing on pretty much any site. But if you don't want to pay somebody $200 for nothing, you would probably consider using http by default on your site, because, thanks to how browsers behave, it just looks "safer" to a user who knows nothing about cryptography. Which is nonsense. It's worse than nothing.
And CAs are not "authorities" at all. They could lie to you, they could be compromised. Of course, the fact that this certificate has been confirmed by "somebody" makes it a little more reliable than if it had never been confirmed by anyone at all, but these "somebodies", the CAs, don't have any control over the situation; they're just some guys who came up with the idea of making money like that early enough. You are as good a CA as Symantec is, you could just start selling certificates and it would be the same — except, well, you are just some guy, so browsers wouldn't accept those certificates, so they'd be worth nothing. It's all just about trust, and I'm not sure I trust Symantec more than I trust you. (And I don't mean I actually trust you, by the way.)
For everyone else it's not really about SSL, security and CAs, it's just about how popular browsers behave.
So, no, monopolies that exist only because they happen to be the ones allowed to do something are never good. Unless maybe they do it for free.
> And CAs are not "authorities" at all. They could lie to you, they could be compromised.
Actually just read their terms of service, which may as well be summarised as "we issue certificates for entertainment purposes only".
There's no question in my mind that the whole thing is a racket and militates against security (you generally don't even know all the evil organisations that your browser implicitly trusts - and all the organisations that they trust etc).
There are certainly other options too; here's my suggestion:
The first time you go to a site whose certificate the browser hasn't seen before, it should show a nice friendly page that doesn't make a fuss about how dangerous this is, and show a fingerprint image for the site that you can verify elsewhere (from a mail you've been sent, for example), next to a list of images from fingerprint servers it knows about that have a record for that site.
Once you accept, it should store that certificate and allow you access to that site without making a big fuss or making it look like it's less secure than an unencrypted site. This should be a relatively normal flow and we should make the user experience accessible to normal people.
It's basically what we do for ssh connections to new hosts.
The SSH approach is exactly what I was thinking of, where you know the fingerprint of the other side you're connecting to.
I believe verification should be done out-of-band, using some other way (e.g. advertising) to transmit the fingerprint to the users. I've used self-signed certificates to collaborate over HTTPS with people I know in real life, and all I do is give them little pieces of paper with my cert printed on them.
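Producing something small enough to fit on a slip of paper is trivial; a sketch, assuming the self-signed cert sits in a file called server.pem:

    import hashlib, ssl

    # Hash the DER form of the certificate and format it for humans.
    der = ssl.PEM_cert_to_DER_cert(open("server.pem").read())
    digest = hashlib.sha256(der).hexdigest().upper()
    print("SHA-256:", ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)))
    # The people holding the slip compare this value against the fingerprint
    # their browser shows in the certificate viewer before choosing to trust it.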
You're (almost) describing certificate pinning. Have a look at https://news.ycombinator.com/item?id=4010711
How would you rotate keys with that scheme?
"Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private."
This is misguided thinking, pure and simple. Because of this line of thinking, your everyday webmaster has been convinced that encrypting data on a regular basis is more trouble than it's worth, which has allowed the NSA (or the Chinese or Iranian authorities, or what have you) to simply put in a tap and slurp the entire internet without even going through the trouble of targeting and impersonating. Basically, this is the thinking that has enabled dragnet surveillance of the internet with such ease.
But as a user I can understand that an http site is insecure, while a self-signed certificate might lull me into a false sense of security.
> no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".
That would be correct if you could assume that the NSA couldn't fake certificates for websites. But it can, so it's wrong and misleading. It's certificate pinning, notary systems etc. that actually give some credibility to the certificate you're currently using, not whatever the browsers indicate as default.
FWIW, (valid) rogue certificates have been found in the wild several times, CAs have been compromised etc. ...
I agree. A more common MITM, and one that it actually would prevent, comes from a rogue wifi operator.
> FWIW, (valid) rogue certificates have been found in the wild several times, CAs have been compromised etc. ...
And it's only going to get worse as SHA-1 becomes more and more affordable to crack.
The NSA has no CA. The only attack they really have is brute force or server compromise - both of which undermine pinning.
Browsers shouldn't silently accept self-signed, but there is a class of servers where self-signed is the best we've got: connecting to embedded devices. If I want to talk to the new printer or fridge I got over the web, they have no way of establishing trust besides Tacking my first request to them.
I bought a camera the other day with the nifty feature of having an NFC tag embedded in it to guide your phone to launching (and installing, if necessary) the companion mobile app.
It occurred to me that this is a really good way of establishing a trust path: while they're only using it to guide you to the right app, they could embed a little public key in there. Then you could authenticate the new printer or fridge by physically being near it.
We'd have to extend our UIs a bit to cover these use cases (it should basically act like a trusted self-signed cert), and probably you only want to trust NFC certs for *.local.
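A sketch of what the phone-side check might look like once it has read a key hash off the tag. The hostname, the pinned value, and the choice to hash the DER-encoded public key are all assumptions for illustration (Python, with the third-party cryptography package):

    import hashlib, socket, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Hypothetical value read from the device's NFC tag: a SHA-256 hash of the
    # device's DER-encoded public key, embedded by the manufacturer.
    PINNED_KEY_SHA256 = "0" * 64

    def live_key_hash(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE        # no CA check: we pin instead
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        pub = x509.load_der_x509_certificate(der_cert).public_key().public_bytes(
            serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(pub).hexdigest()

    if live_key_hash("printer.local") == PINNED_KEY_SHA256:
        print("Device key matches the NFC tag - safe to talk to it")
    else:
        print("Key does not match what the tag advertised - refuse")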
Technically, there's no reason why a fridge couldn't have a signed cert tied to some dynamic DNS (e.g. <fridge-serial-number>.<manufacturer>.<tld>).
Oh god, they have internet fridges now? What on earth for?
> So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation being private", so no error message is shown.
Maybe a security-conscious person thinks that, but the typical user does not knowingly choose http over https, and thus the danger of MitM and (unaccepted) snooping is at least as large for the former.
So it's somewhat debatable why we'd warn users that "hey, someone might be reading this and impersonating the site" for self-signed https but not http.
The use case for the CA system is to prevent conventional criminal activity -- not state-level spying or lawful intercept. The $200 is just a paper trail that links the website to either a purchase transaction or some sort of communication detail.
The self-signed cert risk has nothing to do with the NSA... if it's your cert or a known cert, you add it to the trust store, otherwise, you don't.
Private to the NSA and reasonably private to the person sitting next to you are different use cases. The current model is "I'm sorry, we can't make this secure against the NSA and professional burglars so we're going to make it difficult to be reasonably private to others on the network".
It's as if a building manager, scared that small amounts of sound can leak through a door, decided that the only solution is to nail all the office doors open and require you to sign a form in triplicate that you are aware the door is not completely soundproof before you are allowed to close it to make a phone call. (Or jump through a long registration process to have someone come and install a heavy steel soundproofed door which will require replacement every 12 months.)
After all, if you're closing the door, it's clearly meant to be private. And if we can't guarantee complete security against sound leaks to people holding their ear to a glass on the other side, surely you mustn't be allowed to have a door.
The person next to you in a cafe can MITM a self-signed TLS connection just as easily as the NSA; and the NSA can probably MITM a CA-signed TLS session, since the U.S. government owns or has access to quite a few root certificates. So, "no self-signed certs" is really a measure to protect you from the lowest level of threat. Almost any attacker that can MITM http can MITM https with self-signed certs that you never verify in any way. Encryption without authentication is useless in communications.
Self-signed certificates are still better than http plain text. I understand not showing the padlock icon for self-signed certificates, I don't understand why you would warn people away from them when the worst case is that they are just as unsafe as when they use plain http. IMHO this browser behavior is completely nonsensical.
How would a browser know whether the self-signed certificate that was just presented for www.mybank.com is intended to be self-signed (show no error, but also show no padlock) or whether it's the result of a MITM attack because www.mybank.com is supposed to present a properly signed certificate (show error)?
How would you inform people going to www.mybank.com when it presents a self-signed cert in a way that a) they clearly notice but that b) doesn't annoy them when they connect to www.myblog.com, which also presents a self-signed cert?
No. Self-signed certificates are much worse because they bring a false sense of security.
A self-signed certificate is trivially MITMed unless you have a way to authenticate the certificate. At the moment CAs are the best known way to do that (and before anyone brings up certificate pinning or WoT: they come with their own problems, please read this comment of mine https://news.ycombinator.com/item?id=8616766).
EDIT: You can downvote all you want but I'm still right.
Everyone who repeats the "self-signed certificates are still better than HTTP plain text" lie is hurting everyone else in the long run.
They're much worse, both for the users and from a security perspective. Self-signed certificates are evil unless you know exactly what you're doing and are in full control of both ends of the communication (in which case just trust it yourself and ignore the warnings).
Reddit discussion about this, with many of the same arguments there as here (and just as much talking past each other):
http://www.reddit.com/r/ProgrammerHumor/comments/2l7ufn/alwa...
The warning is designed to let people know that the identity of who you're talking to can't be proven, which is important when someone tries to impersonate a bank, or your email provider, or any number of other important sites.
Because encryption with SSL without trust of the SSL cert is meaningless. It might as well be not encrypted.
Here's one thing that's NOT the solution: throwing out all encryption entirely. Secure vs insecure is a gradient. The information that you're now talking to the same entity as you were when you first viewed the site is valuable. For example, it means that you can be sure that you're talking to the real site when you log in to it on a public wifi, provided you have visited that site before. In fact, I trust a site that's still the same entity as when I first visited it a whole lot more than a site with a new certificate signed by some random CA. In practice the security added by CAs is negligible, so it makes no sense to disable/enable encryption based on that.
Certificates don't even solve the problem they attempt to solve, because in practice there are too many weaknesses in the chain. When you first downloaded firefox/chrome, who knows that the NSA didn't tamper with the CA list? (not that they'd need to)
Moxie Marlinspike's Perspectives addon for Firefox was a good attempt to resolve some of the problems with self-signed certs.
Unfortunately, no browsers adopted the project, and it is no longer compatible with Firefox. There are a couple forks which are still in development, but they are pretty underdeveloped.
I wonder if Mozilla would be more likely to accept this kind of project into Firefox today, compared to ~4 years ago when it was first released, now that privacy and security may be a more important topic to the users of the browser.
The solution, at least for something decentralized, seems to be a web of trust established by multiple other identities signing your public key with some assumption of assurance that they have a reasonable belief that your actual identity is in fact represented by that public key.
That's what PGP/GPG people seem to do, anyway.
Why can't I get my personally-generated cert signed by X other people who vouch for its authenticity?
> no. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".
Well... that's true regardless, as the NSA almost certainly has control over one or more certificate authorities.
But I agree with the sentiment. :)
It's interesting that your boogeyman is the NSA and not scammers. I think scammers are 1000X more likely. Especially since the NSA can just see the decrypted traffic from behind the firewall. There's no technology solution for voluntarily leaving the backdoor open.
> or to NSA which
Nah. The NSA, or any adversary remotely approaching them in resources, has the ability to generate certificates that are on your browser's trust chain. Self-signed and unknown-CA warnings suggest that a much lower level attacker may be interfering.
Just a small nitpick: I'm pretty sure the NSA has access to a CA to make it look legit.
> The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.
We do have a solution that does accept self-signed certificates. The remaining pieces need to be finished and the players need to come together though:
https://github.com/okTurtles/dnschain
If you're in San Francisco, come to the SF Bitcoin Meetup, I'll be speaking on this topic tonight:
http://www.meetup.com/San-Francisco-Bitcoin-Social/events/18...
Let's Encrypt seems like the right "next step", but we still need to address the man-in-the-middle problem with HTTPS, and that is something the blockchain will solve.
I totally agree that CAs are a racket. There's zero competition in that market and the gate-keepers (Microsoft, Mozilla, Apple, and Google) keep it that way (mostly Microsoft however).
That being said: Identity verification is important as the encryption is worthless if you can be trivially man-in-the-middled. All encryption assures is that two end points can only read communications between one another, it makes no assurances that the two end points are who they claim to be.
So verification is a legitimate requirement and it does have a legitimate cost. The problem is the LOWEST barriers to entry are set too high, this has become a particular problem when insecure WiFi is so common and even "basic" web-sites really need HTTPS (e.g. this one).
It is not a legitimate requirement.
HTTP can be man-in-the-middled passively, and without detection; making dragnets super easy.
In order for HTTPS self-signed certs to be effectively man-in-the-middled, the attacker needs to be careful to only selectively MITM, because if the attacker does it indiscriminately, clients can record what public key was used. The content provider can have a process that sits on top of a VPN / Tor and periodically requests a resource from the server; if it detects that the service is being MITMed, it can shut down the service and a certificate authority can be brought in.
Edit: Also, all this BS about how HTTPS implies security is beside the grandparent's point: certificates and encryption are currently conflated to the great detriment of security, and they need not be.
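A minimal sketch of the kind of self-monitoring probe described above, assuming the operator knows the SHA-256 fingerprint of the cert they actually deployed. The hostname and fingerprint are placeholders, and in practice you would run this from an independent vantage point (VPN, Tor, a friend's connection) so a selective MITM can't simply exempt your probe:

    import hashlib, ssl, time

    EXPECTED = "sha256-hex-of-the-cert-you-actually-deployed"  # placeholder

    def served_fingerprint(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    while True:
        if served_fingerprint("www.example.com") != EXPECTED:
            print("A different cert is being served - possible MITM; alert the operator")
        time.sleep(600)  # probe every ten minutes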
> HTTP can be man-in-the-middled passively, and without detection; making dragnets super easy.
Nothing can be man-in-the-middled passively, that makes no sense. That isn't what a MitM is. It requires active involvement by its very nature.
> In order for HTTPS self-signed certs to be effectively man-in-the-middled, the attacker needs to be careful to only selectively MITM, because if the attacker does it indiscriminately, clients can record what public key was used.
I genuinely don't understand what you're trying to say.
> The content provider can have a process that sits on top of a VPN / Tor and periodically requests a resource from the server; if it detects that the service is being MITMed, it can shut down the service and a certificate authority can be brought in.
If the MitM originates from a specific location (e.g. a single Starbucks, a single hotel, an airport, etc) it would never be detected by that method.
> Also, all this BS about how HTTPS implies security is beside the grandparent's point: certificates and encryption are currently conflated to the great detriment of security, and they need not be.
Only MitM protection AND encryption together provide a secure connection. Individually they're insecure.
If someone wants to come up with a security scheme which doesn't depend on certificates that would be fine. You just have to solve the encryption issue (easy) and the identity issue (hard).
That's all good in theory, but there have been demonstrated attacks against man-in-the-middle-able protocols and we've lacked the ability to respond usefully, precisely because the protocols were designed to be man-in-the-middle-able. Everyone knows it's happening and it's even easier to detect than your example, but there's nothing useful to do with that knowledge other than complain.
https://www.eff.org/deeplinks/2014/11/starttls-downgrade-att...
All the attacker needs to do is target the "CA" of the target.
For example, in an individual user situation, if the "CA" is a mac user, you use a local exploit, and export the private key from the Keychain. Done.
That's the standard motivation for CAs, but I don't buy it.
Most of the time, I'm much more interested in a domain identity than a corporate identity. If I go to bigbank.com and am presented with a certificate, I want to know that I am talking to bigbank.com -- not that I'm talking to "Big Bank Co." (or at least one of the legal entities around the world under that name).
Therefore it would make much more sense if your TLD made a cryptographic assertion that you are the legal owner of a domain, and this information could be utilized up the whole protocol stack.
That would not have a legitimate cost, apart from the domain name system itself.
Without some kind of authentication, the encryption TLS offers provides no meaningful security. It might as well be an elaborate compression scheme. The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session. But of course, real attackers trivially see all the traffic for a session, because they snare their targets with routing, DNS, and layer 2 redirection.
What's especially baffling about self-signed certificate advocacy is the implied threat model. Low- and mid-level network attackers and crime syndicates can't compromise a CA. Every nation state can, of course (so long as the site in question isn't public-key-pinned). But nation states are also uniquely capable of MITMing connections!
> The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session
Could you elaborate here? With a self-signed cert, the server is still not sending secret information in the first few packets; it just tells you (without authentication) which public key to use to encrypt the later packets (well, the public key used to encrypt the session key for later encryption).
The threat model would be eavesdroppers who can't control the channel, only look. Using the SS cert would be better than an unencrypted connection, though it still shouldn't be represented as being as secure as full TLS. As it stands, the server is either forced to wait to get the cert, or to serve unencrypted such that all attackers can see everything.
There are no such attackers.
I'm not entirely sure I understand your point, so if I misunderstood you please correct me.
First, TLS rests on three principles; if you lose one, it becomes essentially useless:
1) Authentication - you're talking to the right server
2) Encryption - nobody saw what was sent
3) Verification - nothing was modified in transit
Without authentication, you essentially are not protected against anything. Any router, any government can generate a cert for any server or hostname.
Perhaps you don't think EV certs have a purpose - personally, I think they're helpful to ensure that even if someone hijacks a domain, they cannot issue an EV cert. Luckily, the cost of certificates is going down over time (usually you can get the certs you mentioned at $10/$150). That's what my startup (https://certly.io) is trying to help people get: cheap and trusted certificates (sorry for the promotion here)
Encryption without verification is not useless; it protects against snooping.
It doesn't prevent snooping -- you can still be MITM'd. It does, however, make snooping much harder because it has to be done actively.
If you don't verify what is sent, I could easily send you a malicious web form. If you don't verify the key or cert behind the connection, anyone can claim to be x site.
The warning pages are really ridiculous. Why doesn't every HTTP page show a warning you have to click through?
But it's not like MITM attacks are not real. CAs don't realistically do a thing about them, but it is true that you can't trust that your connection is private based on TLS alone (unless you're doing certificate pinning or you have some other solution).
You're absolutely right. From first principles, HTTP should have a louder warning than self-signed HTTPS.
Our hope is that Let's Encrypt will reduce the barriers to CA-signed HTTPS sufficiently that it will become realistic for browsers to show warning indicators on HTTP.
If they did that today, millions of sites would complain, "why are you forcing us to pay money to CAs, and deal with the incredible headache of cert installation and management?". With Let's Encrypt, the browsers can point to a simple, single-command solution.
Thanks for doing this. It's really great and it's something that clearly needs to happen.
The next step will be to replace the CA system with something actually secure, but that comes after we move the web to a place where most websites are at least trying.
Because HTTP does not imply security; HTTPS does. Without proper certificates, these guarantees are diluted; hence the warnings.
> Why doesn't every HTTP page show a warning you have to click through?
Back in the Netscape days, it did. People got tired of clicking OK every time they searched for something.
Eventually maybe the browsers will do that. Currently far too many websites are HTTP-only to allow for that behavior, but if that changes and the vast majority of the web is over SSL it would make sense to start warning for HTTP connections. That would further reduce the practicality of SSL stripping attacks.
It's not enough to keep the snoops out - you need to KNOW you're keeping the snoops out. That's what SSL helps with. A certificate is just a public key that a public (aka trusted) authority has vouched for. Sites can also choose to verify the certificate: if this is done, even if a 3rd party can procure a fake cert, they can't snoop the traffic unless they have the same cert the web server uses.
Site: Here's my public key. Use it to verify that anything I sent you came from me. But don't take my word for it, verify it against a set of trusted authorities pre-installed on your machine.
Browser: Ok, your cert checks out. Here's my public key. You can use it for the same.
Site: Ok, now I need you to reply to this message with the entire certificate chain you have for me, to make sure a 3rd party didn't install a root cert and inject keys between us. Encrypt it with both your private key and my public key.
Browser: Ok, here it is: ASDSDFDFSDFDSFSD.
Site: That checks out. Ok, now you can talk to me.
This is what certificates help with. There are verification standards that apply, and all the certificate authorities have to agree to follow these standards when issuing certain types of SSL certificates. The most stringent, the "Green bar" with the entity name, often require verification through multiple means, including bank accounts. Certificate authorities that fail to verify properly can have their issuing privileges revoked (though this is hard to do in practice, it can be done).
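The dialogue above is a loose paraphrase, but the client-side half of it is roughly what any default TLS client does before sending application data: validate the chain against the pre-installed roots and check the name. A minimal sketch with Python's standard ssl module; the hostname is only an example:

    import socket, ssl

    ctx = ssl.create_default_context()  # loads the machine's pre-installed root CAs
    with socket.create_connection(("www.example.com", 443)) as sock:
        # The handshake happens inside wrap_socket: chain validation and hostname
        # matching both occur here and raise ssl.SSLCertVerificationError on failure.
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            cert = tls.getpeercert()
            print("issued to:", dict(item[0] for item in cert["subject"]))
            print("issued by:", dict(item[0] for item in cert["issuer"]))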
Here are some comparison screenshots of the "bling" that is being described (it's hard to even tell that some of these sites are SSL'd without getting the EV):
https://www.expeditedssl.com/pages/visual-security-browser-s...
I'm pissed off 'cos I'm on the board for rationalwiki.org and we have to pay a friggin' fortune to get the shiny green address bar ... because end users actually care, even as we know precisely what snake oil the whole SSL racket is. Gah.
I'm all for CAs to burn in a special hell. The other cost, though, was always getting a unique IP. Is that still a thing? Has someone figured out multiple certificates for different domains on the same IP? Weren't we running out of IPv4 at some point?
Yes, there are two main mechanisms, each with its own limitations.
https://en.wikipedia.org/wiki/SubjectAltName
https://en.wikipedia.org/wiki/Server_Name_Indication
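SNI is the one that made name-based virtual hosting work for HTTPS: the client states the hostname it wants during the handshake and the server picks the matching cert, so one IP can serve many domains. A server-side sketch with Python's ssl module (3.7+ for sni_callback); the hostnames and key/cert filenames are made up:

    import ssl

    def make_ctx(certfile, keyfile):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile, keyfile)
        return ctx

    contexts = {
        "blog.example.com": make_ctx("blog.pem", "blog.key"),
        "shop.example.com": make_ctx("shop.pem", "shop.key"),
    }
    default_ctx = make_ctx("default.pem", "default.key")

    def pick_cert(ssl_sock, server_name, initial_ctx):
        # Called during the handshake with the name the client asked for (SNI);
        # swapping the context makes the server present the matching certificate.
        if server_name in contexts:
            ssl_sock.context = contexts[server_name]

    default_ctx.sni_callback = pick_cert
    # default_ctx is then used to wrap the listening socket, e.g. with
    # http.server or any framework that accepts an SSLContext.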
The thing is, without a chain of trust, the self-signed certificate might be from you or it might be from the "snoops" themselves. Certificates that don't contain any identifying information are vulnerable to man-in-the-middle attacks.
I have some certificates through RapidSSL, and when they send me reminders to renew, the e-mails come with this warning:
"Your certificate is due to expire.
If your certificate expires, your site will no longer be encrypted."
Just blatantly false.
They might as well say something even more ominous: "If your certificate expires, your site will no longer be accessible."
Of course, we know that's not true either, but try explaining to your visitors how to bypass the security warning (newer browsers sure don't make it obvious, even if you know to look for it).
I just bought a cert on Saturday for $9. It's less than the domain name.
$9 is a big step up from free, which is what the rest of my blog costs.
Is your blog a .tk site? Where else would you get a free domain?
Can get them free for web use. Not sure where he is coming from.
Wildcard SSL certs are ~$100/year. Those have always been much more of a racket, but they're so worth the extra cost to set them up once on your load balancers and not have to think about SSL certs again for 5+ years.
> 200 bucks for us to say he's cool
There are trusted free certificates as well, like the ones from StartSSL.
> if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge
Yeah, $10,000 and legal documentation proving that they are exactly the same legal entity as the one stated on the certificate. All verified by a provider that's been deemed trustworthy by your browser's developers.
Finally, if a certificate is self-signed, it generally should be a large warning to most users: the certificate was made by an unknown entity, and anybody may be intercepting the communication. Power users understand when self-signed CAs are used, but they don't get scared of red warnings either, so that's not an issue.
> This certificate industry has been such a racket. It's not even tacitly understood that there are two completely separate issues that certificates and encryption solve. They get conflated, and non-technical users rightly get confused about which thing is trying to solve a problem they aren't sure why they have.
But a man-in-the-middle attack will remove any secrecy encryption provides and to prevent that, we require certificate authorities to perform some minimal checks that public keys delivered to your browser are indeed the correct ones.
You've got a point about how warnings are pushing incentives towards more verification, but they serve a purpose that aligns with secrecy of communication.
Wasn't WOT (Web Of Trust) supposed to fix this? Basically, I get other people to sign my public key asserting that it's actually me and not someone else, and if enough people do that it's considered "trusted", but in a decentralized fashion that's not tied to "authorities"?
No, it means a trusted third party has not verified that whoever you are connecting to is who they say they are.
Perhaps you should understand a system before slandering it? As others have said, encryption without authentication is useless.
Running a CA has an associated cost, including maintenance, security, etc. That's what you pay for when you acquire a certificate. Whether the markup in current market prices is too high would be a different question, but paying for a certificate is definitely not spending $200 to look cool.
CAs are the best known way (at the moment) to authenticate through insecure channels (before anyone brings up pinned certs or WoT, read this comment of mine: https://news.ycombinator.com/item?id=8616766)
EDIT: You can downvote all you want but I'm still right. Excuse my tone, but slandering a system without an intimate understanding of the "how"s and the "why"s (i.e. spreading FUD) hurts everyone in the long run.
That's the third comment of yours in which I've seen you taunt downvoters via edits in this thread alone. That's why I'm downvoting you. Knock it off, please.
I'm sorry it came across as a taunt, I didn't mean it like that.
Downvote sprees without an explanation detract from healthy discussion since they basically mean "I'm so mad about how wrong you are that I don't even care about why you think you are right".
I guess I'll just ignore them...