Comment by _wmd
8 years ago
Step 1) MITM the entire Internet, undermining its SSL infrastructure, build a business around it
Step 2) leak cleartext from said MITM'd connections to the entire Internet
I recently noted that in some ways Cloudflare are probably the only entity ever to have managed to cause more damage to popular cryptography than the 2008 Debian OpenSSL bug (thanks to their "flexible" ""SSL"" """feature"""), but now I'm certain of it.
"Trust us" doesn't fly any more, this simply isn't good enough. Sorry, you lost my vote. Not even once
edit: why the revulsion? This bug would have been caught with valgrind, and by the sounds of it, using nothing more complex than feeding their httpd a random sampling of live inputs for an hour or two
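(To make that concrete, here's a minimal hypothetical C sketch of the class of heap overread valgrind catches; illustrative only, not their actual parser:)

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical buffer handling: copies past the end of a heap block. */
    int main(void) {
        char *buf = malloc(16);   /* 16-byte heap allocation */
        memset(buf, 'A', 16);
        char out[64];
        memcpy(out, buf, 32);     /* reads 16 bytes past the end of buf */
        free(buf);
        return out[0];            /* use the result so it isn't optimized out */
    }

Build that with -g, run it under valgrind, and you get an "Invalid read" report pointing straight at the memcpy line.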
>edit: why the revulsion
I'd guess it's because of the crude and reductive way you describe the service Cloudflare provides. I don't know what type of programming you do, but many small services don't have the infrastructure to mitigate the kinds of attacks Cloudflare deals with, and they wouldn't be around without services like this.
I don't like the internet becoming centralized into a small number of places that mitigate DDoS attacks like this, but I like the alternative (being held ransom by anyone with access to a botnet) even less.
I'm going to take a more even-handed approach than what you're suggesting. Any time you work with a service like this you risk these kinds of things - it's part of the implicit cost/benefit analysis humans do every day. I'm not ready to throw out the baby with the bathwater because of one issue. I'm not sure what alternative you're suggesting (I didn't see any suggestions, just a lot of ranting, which might also contribute to the 'revulsion'), but it doesn't sound any better than what we have.
So rather than demand fixes for the fundamental issues that enable DDoS attacks (IP spoofing, infected computers being allowed to remain connected, etc.), we just continue down this path of massive centralization of services into a few big players that can afford the arms race against botnets.
Using services like Cloudflare as a 'fix' is wrecking the decentralized principles of the Internet. At that point we might as well just write all apps as Facebook widgets.
When in a tactical emergency do not say "and why is this shit raining down upon us?"
That is a separate step. First you either take cover or help.
8 replies →
>I'm not ready to throw out the baby with the bathwater because of one issue.
Extreme centralization of the Internet is not a "baby", except maybe in the sense of a cuckoo's egg.
But I'm willing to bet the mentality of this comment is highly representative of many web developers and service providers. They will not seek to fix anything, because they don't see this state of things as a problem in the first place.
How about... stop CLOUD THIS and CLOUD THAT.
Cloud means extreme centralization.
It means giving your data to a third party you don't control.
Why?
Why does our networked software have to assume a centralized topology?
In the days when developed countries had dialup, protocols (IRC, email, etc.) were all decentralized. Today, all the famous developers live with fancy broadband internet connections and have forgotten what it's like to have to think about netsplits.
The result... all the software is either "online" or broken.
There shouldn't be an "online" or "offline". There should be "do I have access to server X currently?"
Why do we need Google Docs to collaborate on a document if we are all in the same classroom?
Why do we need centralized facebook server farms whose engineers post on highscalability how they enable us all to post petabytes of photos and comment to our friends?
Why do we need centralized sites to comment at all? Each thread is local to its parent.
Why does India need internet.org from facebook?
If communities could have a network that survives without an uplink to the outside world then DDOS from the global internet would just cut off that network's hosting of documents to outsiders. They'd still be able to do EVERYTHING locally - plan dinners, book a local appointment, send an email etc. and even post things out to the greater internet.
This is a future I want to see.
We already have mesh networks. We need more web based software to run these things.
That's what we are building at qbix.com btw.
Your why questions can all be answered by "It's cheaper than hiring a team to do it in-house". At the end of the day it's all about money and non-techy people are often the people in charge of the money.
1 reply →
I agree with you 100%.
Tim Berners-Lee, the "father" of the World Wide Web, is currently advocating for exactly what you are asking for.
See: https://www.decentralizedweb.net/
lol, qbix.com connects to cloudflare.com
3 replies →
from wikipedia [1]:
- Cloudflare was ranked 7th among the top 50 Bad Hosts by HostExploit. The service has been used by Rescator, a website that sells payment card data.
- Two of ISIS' top three online chat forums are guarded by Cloudflare.
- An October 2015 report found that Cloudflare provisioned 40% of SSL certificates used by phishing sites with deceptive domain names resembling those of banks and payment processors.
and so on... WTF is wrong with those guys? money-first approach?
[1] https://en.wikipedia.org/wiki/Cloudflare#Criticism_and_contr...
Step 0) Obtain black funding from NSA budget to start and "VC invest" in a global CDN company...
(Now I'm trawling Crunchbase to see if I can work out which investors are NSA front companies, then I'm gonna look to see what _else_ they and their partners have invested in...)
Covertly get into a company that terminates SSL for half the internet, and... spill your precious secrets everywhere, instead of siphoning them off silently?
Plausible deniability? "How could we have known the flaw was exploited by NSA and FBI? We didn't know about the flaw at all!" When, actually, it was designed by NSA, before they created CF as an attack vector. Eventually the vuln is discovered as was inevitable, but because the caches were theoretically "public" no one notices all the drone strikes and parallel constructions correlated with CF use.
I don't actually believe that, but it isn't an unreasonable theory.
Not NSA, but the CIA funds and operates In-Q-Tel[1]. They've funded companies like Palantir and Keyhole (which became Google Earth).
[1] https://www.crunchbase.com/organization/in-q-tel
I should have done my research, but I walked away from an accepted offer at a company once I found out they took money from In-Q-Tel.
2 replies →
"Step 0) Obtain black funding from NSA budget to start and "VC invest" in a global CDN company..."
I once came up with that exact concept for a nation-state subversion. It would even pay for itself over time. I kept thinking back to it while watching the rise of the CDNs and the security approaches that trust them.
Long been rumoured in the more paranoid corners of the web that they are intelligence fronts/partners.
Of course they're intelligence partners, perhaps not wittingly, but Cloudflare was designed from the ground up to be one of the most interesting targets for every intelligence agency in the world.
After the Snowden leaks it really seems nonsensical to give Cloudflare the benefit of the doubt and assume that they aren't compromised.
Am I misunderstanding, or would this be useful for parallel construction, even though the public failure actually subverts the usefulness of Cloudflare as a MITM partner?
They also actively deter Tor use. I've cancelled subscriptions with Cloudflare-hosted sites because they make securely and anonymously browsing their sites a pain.
I'm running a side-project on Cloudflare and it's accessible through Tor without problems. I suspect this comes down to the settings a site owner sets up in their Cloudflare interface. It would stand to reason that if, for example, you applied the highest security setting across the board, Tor and VPN users would be presented with a captcha.
I have been presented with a captcha by Cloudflare many times without using Tor or a VPN. It is the best way to drive users away from your website. My natural reaction is that unless I absolutely need to use that particular website, I move to the next result on Google. Websites that use Cloudflare are suicidal.
2 replies →
Is this made clear in their UI? Do they have something saying "this setting will screw over many VPN users" and "this setting will screw over all Tor users"? If not, it's in large part their problem as well.
1 reply →
It makes sense that they treat Tor like a probable adversary, but the cost analysis seems really flawed.
Sure, requests passing through Tor are proportionally more likely to be malicious, but given Tor's bandwidth constraints the adversary seems limited.
The costs aren't only the lost business from people like you, but also from people who should use Tor giving in. There's some wisdom in people using anonymized services to research even something as mundane as what their dog ingested, much less other medical questions.
CloudFlare is neither the first nor the biggest CDN. I can't recall Akamai having a hole this big. They're either more secure or better at keeping things quiet.
To be fair to CloudFlare, Google had a heap issue a few years back (maybe like 7 now) where internal flags and copies of argv (which Google use heavily for config) were clearly present in output from their HTTP frontends, including references to Borg before Borg was ever documented publicly.
Over in App Engine land, someone bypassed their JVM sandbox and managed to extract a copy of their JVM image, which included much of their revered base system statically linked into something like a 500mb binary.
Sorry, I'd have to go digging to find references to either of these incidents. At least in either case customer data wasn't leaking, but suffice it to say it's a little bit of the pot calling the kettle black.
And finally, let's not forget the China incident which, rumour has it, resulted in a system compromise at Google right to the heart of their engineering organization. Of course they didn't get roasted like Yahoo recently did over their password leak.
here's the JVM bypass: http://seclists.org/fulldisclosure/2014/Dec/26 / http://www.security-explorations.com/materials/se-2014-02-re... (see page 58 for some fun)
I'd like to see how much of a mess their argvs are
1 reply →
Off topic, but I find it really impressive that Google packed their system into a 500 millibit binary; wow!
Seriously, people, units and prefixes are case-sensitive.
Step "What does secure mean anyway") SSL terminate even sites that are not sending data to Cloudflare securely
Yup, this made it crystal clear, years ago, that Cloudflare's business incentives were and are at odds with a secure web.
I don't buy this argument.
A site using Flexible SSL is no less secure than one using http://, and in fact is more secure, because nobody can MitM the connection between CloudFlare and the end user. The only thing vulnerable is the connection between the website and CloudFlare (~~and only to MitM, not to passive sniffing~~ EDIT: this isn't true, see [1]), but that's a much smaller and much better-protected surface area.
Now it's quite obvious that the alternative SSL options are much better because they secure the data properly the whole way. But claiming that Flexible SSL is somehow undermining the security of the web is extremely hyperbolic.
[1]: The connection between the origin server and CloudFlare can in fact be passively sniffed. I thought Flexible SSL was the option to use an arbitrary self-signed cert, but it actually means no encryption.
35 replies →
To my sibling: the issue is that people can and do consider Flexible SSL "good enough", when it really isn't. It gets you the green lock and the warm fuzzies, but the page just isn't secure. A false sense of security is worse than no security, because no security at least is glaringly obvious.
7 replies →
You're absolutely right. Cloudflare is a "global active adversary"[1] and has done irreparable harm to the internet at large. This is just a small taste of what's surely to come from CloudFlare's massive influence. They've shown that they cannot be trusted with everyone's data.
[1] https://trac.torproject.org/projects/tor/ticket/18361
"This bug would have been caught with valgrind, and by the sounds of it, using nothing more complex than feeding their httpd a random sampling of live inputs for an hour or two"
Or prevented by using abstractions that do bounds checking. Or even by just using Ragel with a memory-safe language, preventing all issues like that from ever happening. That probably would have been less work, even counting the reimplementation of an HTTP proxy from scratch.
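To sketch what I mean by a bounds-checking abstraction, here's a rough illustrative C version (names are mine; per Cloudflare's postmortem, the generated parser relied on a pointer-equality end check that a separate bug let execution jump past):

    #include <stdint.h>

    /* Illustrative checked cursor: every read goes through one accessor. */
    typedef struct {
        const uint8_t *p;   /* current position       */
        const uint8_t *pe;  /* one past the last byte */
    } cursor_t;

    /* Returns -1 at or past the end instead of reading stale heap memory.
     * Note >= rather than ==: even if a logic bug overshoots pe, the
     * check still fails closed. */
    static int cursor_next(cursor_t *c) {
        if (c->p >= c->pe)
            return -1;
        return *c->p++;
    }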
>with a memory safe language and prevented all issues like that from ever happening.
Drastically reduced, but not quite "ever". For instance, if you use a GC language, especially in this domain, you might do some data pooling to reduce GC overhead. Maybe you forget to clear data in the pool. The same kind of error can result.
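A tiny sketch of that pooling hazard, in C to match the examples upthread (the logic is the same with, say, Go's sync.Pool or a Java object pool; names are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative one-slot buffer "pool". */
    static char pool_buf[64];

    /* Hands out the pooled buffer WITHOUT clearing it -- the bug in
     * question. A correct version would memset() it to zero here. */
    static char *pool_get(void) {
        return pool_buf;
    }

    int main(void) {
        /* Request 1 writes a secret into the pooled buffer. */
        strcpy(pool_get(), "secret-session-token");

        /* Request 2 writes a shorter payload, then sends the whole
         * buffer: the tail still holds request 1's secret. */
        char *b = pool_get();
        strcpy(b, "hi");
        fwrite(b, 1, sizeof pool_buf, stdout);
        return 0;
    }

Note there's no memory-safety violation here at all, which is the point: memory safety alone doesn't stop stale pooled data from leaking.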
But yes, I feel like security-sensitive stuff like this shouldn't be done in C/C++ any more.