Comment by ecjhdnc2025
8 months ago
I will remind HNers: isn't Cloudflare the company that leaked sensitive data through cache files that were indexed by at least Google? And when the tech community were up in arms about the massive leakage of sensitive data, the CEO's strategy was to turn up here and criticise Google for not deindexing quickly enough.
You get what you pay for.
That's one of the main reasons I'm leery about them. Such a big f-up is difficult to forget. It shows they have a "move fast and break things" culture, which feels wrong for a company responsible for critical infrastructure.
In response to this incident, Cloudflare made big engineering changes, including a huge effort to move away from C as much as possible.
The offending parsers were rewritten in Rust (https://github.com/cloudflare/lol-html), as were the WAF, image optimization, and a few other services. Nginx is being replaced with a custom cache server.
New implementations either use the Workers platform or are written in Rust or Go.
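For a feel of what those Rust rewrites look like, here's a minimal sketch using lol-html's documented streaming-rewriter API (the selector and the http-to-https rewrite are illustrative, not Cloudflare's actual rules):

    use lol_html::{element, rewrite_str, RewriteStrSettings};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Handlers run as the document streams through the rewriter,
        // so memory use stays bounded regardless of input size.
        let output = rewrite_str(
            r#"<div><a href="http://example.com"></a></div>"#,
            RewriteStrSettings {
                element_content_handlers: vec![element!("a[href]", |el| {
                    let href = el
                        .get_attribute("href")
                        .unwrap()
                        .replace("http:", "https:");
                    el.set_attribute("href", &href)?;
                    Ok(())
                })],
                ..RewriteStrSettings::default()
            },
        )?;
        assert_eq!(output, r#"<div><a href="https://example.com"></a></div>"#);
        Ok(())
    }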
Memory safety doesn't fix fundamental design flaws.
I interviewed there once and they asked me what I would do if a service broke after a deployment. I said the first step was to revert to the last known good version and then investigate. Color me surprised when that was not the answer they expected.
Cloudflare's internal release tool suggests a revert when monitoring detects failures during a deployment, so that expectation doesn't reflect Cloudflare's actual practices. There must have been more to it, or it was a misunderstanding.
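To make the revert-first policy concrete, here's a purely hypothetical sketch; the deploy and health-check functions are stand-ins, not any real release tool's API:

    // Hypothetical revert-first deployment policy. All names here
    // are illustrative stand-ins, not a real release tool's API.
    struct Release {
        version: String,
    }

    fn health_check_passes(_release: &Release) -> bool {
        // Stand-in for real monitoring (error rates, latency, probes).
        false
    }

    fn deploy(version: &str) -> Release {
        println!("deploying {version}");
        Release { version: version.to_string() }
    }

    fn main() {
        let last_known_good = Release { version: "v41".to_string() };
        let candidate = deploy("v42");

        if !health_check_passes(&candidate) {
            // Restore service first; investigate the bad build afterwards.
            println!("monitoring failed, reverting to {}", last_known_good.version);
            deploy(&last_known_good.version);
        }
    }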
That's strange. What was the "correct" answer?
Was the correct answer related to cache invalidation?
I remember them criticising Google for not being faster at removing cached files. I don't remember them blaming Google for their screw up.
And let's be honest, if a big provider wants to offer cached versions of pages, it probably should have a way to purge those files in case there's a problem (e.g. malware).
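For a concrete picture of what such a purge mechanism looks like, here's a sketch against Cloudflare's own documented purge_cache endpoint (zone ID, token, and URL are placeholders, and it assumes reqwest with the "blocking" and "json" features plus serde_json; this is Cloudflare's API, not Google's):

    use serde_json::json;

    fn main() -> Result<(), reqwest::Error> {
        let zone_id = "ZONE_ID";     // placeholder
        let api_token = "API_TOKEN"; // placeholder

        // Ask Cloudflare to evict specific URLs from its edge cache.
        let resp = reqwest::blocking::Client::new()
            .post(format!(
                "https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache"
            ))
            .bearer_auth(api_token)
            .json(&json!({ "files": ["https://example.com/compromised-page"] }))
            .send()?;

        println!("purge status: {}", resp.status());
        Ok(())
    }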
> I don't remember them blaming Google for their screw up.
You're putting words into my mouth.
I'm not sure what you expected people to take away from your message, given how it's worded. That may not have been your intent, but the way you phrased your point heavily implies it.
Um, you literally wrote:
> the CEO's strategy was to turn up here and criticise Google for not deindexing quickly enough
Isn't that "them blaming Google for their screw up"?
This was a much-needed reminder. Although it's quite difficult to find a DDoS mitigator better than CF, I still wouldn't trust them for everything, especially since they are most likely snooping on the decrypted HTTPS connections.
> Although it's quite difficult to find a DDoS mitigator better than CF, I still wouldn't trust them for everything.
Adding challenges, TLS fingerprinting, and rate limiting is possible on just about every major CDN platform, to be honest. I guess with CF it's more out-of-the-box, where you don't really have to think too much about policies, but at the same time you can't go as granular in those policies (e.g. layered rules) as on some others.
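For reference, the rate-limiting half of that usually boils down to a token bucket per client; a minimal sketch (the capacity and refill numbers are arbitrary):

    use std::time::Instant;

    // Minimal token-bucket rate limiter: the primitive behind the
    // per-client rate-limiting rules CDN platforms expose.
    struct TokenBucket {
        capacity: f64,
        tokens: f64,
        refill_per_sec: f64,
        last: Instant,
    }

    impl TokenBucket {
        fn new(capacity: f64, refill_per_sec: f64) -> Self {
            Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
        }

        fn allow(&mut self) -> bool {
            // Refill proportionally to elapsed time, capped at capacity,
            // then spend one token per admitted request.
            let now = Instant::now();
            let elapsed = now.duration_since(self.last).as_secs_f64();
            self.last = now;
            self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
            if self.tokens >= 1.0 {
                self.tokens -= 1.0;
                true
            } else {
                false
            }
        }
    }

    fn main() {
        // Allow bursts of 5 requests, refilling at 1 request/second.
        let mut bucket = TokenBucket::new(5.0, 1.0);
        for i in 0..7 {
            println!("request {i}: {}", if bucket.allow() { "pass" } else { "throttled" });
        }
    }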