Comment by martinald
5 hours ago
Many reasons, but DDoS protection has massive network effects. The more customers you have (and therefore the more bandwidth you provision), the easier it is to hold up against a DDoS, since a DDoS usually targets just one customer.
So there are massive economies of scale. A small CDN with (say) 10,000 customers and 10 Mbit/s provisioned per customer can handle a 100 Gbit/s DDoS (way too simplistic, but hopefully you get the idea) - way too small.
If you have the same traffic provisioned on average per customer and have 1 million customers, you can handle a DDoS 100x the size.
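To make the scaling concrete, here's a back-of-the-envelope sketch in Python (the per-customer figure and customer counts are just the illustrative numbers from above, not real provisioning data):

    # Back-of-the-envelope only: aggregate provisioned bandwidth scales with
    # customer count, while a typical DDoS still targets a single customer.
    PER_CUSTOMER_MBIT = 10  # illustrative provisioning per customer, as above

    for customers in (10_000, 1_000_000):
        aggregate_gbit = customers * PER_CUSTOMER_MBIT / 1_000
        print(f"{customers:>9,} customers -> ~{aggregate_gbit:,.0f} Gbit/s of aggregate headroom")

    #    10,000 customers -> ~100 Gbit/s
    # 1,000,000 customers -> ~10,000 Gbit/s, i.e. 100x, against the same single-target attack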
The only way to compete with this is to massively overprovision bandwidth per customer (which is expensive, as those customers won't pay more just so you can have extra redundancy to make up for being smaller).
In a way (like many things in infrastructure), CDNs are natural monopolies. The bigger you get -> the more bandwidth and PoPs you can have -> the more attractive you are to more customers (and this repeats over and over).
It was probably very astute of Cloudflare to realise that offering such a generous free plan was a key step in this.
Your argument is technically flawed.
In a CDN, customers consume bandwidth; they do not contribute it. If Cloudflare adds 1 million free customers, they do not magically acquire 1 million extra pipes to the internet backbone. They acquire 1 million new liabilities that require more infrastructure investment.
All you are doing is echoing their pitch book. Of course they want to skim their share of the pie.
> In a CDN, customers consume bandwidth; they do not contribute it
They contribute money which buys infrastructure.
> If Cloudflare adds 1 million free customers,
Is the free tier really customers? Regardless, most of them are so small that they don't cost Cloudflare much anyway. The infrastructure is already there. It's worth it to them for the goodwill it generates, which leads to future paying customers. It probably also gives them visibility into what good vs bad traffic looks like.
1 million small sites could very well cost Cloudflare less than 1 big site.
I imagine every single customer is provisioned based on some expected typical peak traffic, and that's what they base their capital investment in bandwidth on.
However, most customers are rarely at their peak, which gives you tremendous spare capacity to eat DDoS attacks, assuming the attacks are uncorrelated with those peaks. That leaves huge amounts of capacity that is frequently doing nothing, and Cloudflare advertises this spare capacity as "DDoS protection" (a toy sketch of this is below).
I suppose in theory it might be possible to massively optimise utilisation of your links, but that would come at the cost of DDoS protection and might not improve your margin very meaningfully, especially if customers care a lot about being online.
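A toy model of that spare-capacity argument (the peak and duty-cycle numbers are invented purely for illustration; real capacity planning is far more involved):

    # Toy illustration of statistical multiplexing: each customer is provisioned
    # for its peak, but peaks are rarely simultaneous, so the aggregate sits far
    # below the sum of provisioned peaks. All numbers are made up.
    import random

    random.seed(1)
    CUSTOMERS = 100_000
    PEAK_MBIT = 10                      # provisioned per-customer peak (invented)

    provisioned = CUSTOMERS * PEAK_MBIT
    # assume a customer is near peak ~5% of the time, otherwise at ~10% of peak
    in_use = sum(PEAK_MBIT if random.random() < 0.05 else PEAK_MBIT * 0.1
                 for _ in range(CUSTOMERS))

    print(f"provisioned: {provisioned / 1_000:,.0f} Gbit/s")
    print(f"typical use: {in_use / 1_000:,.0f} Gbit/s")
    print(f"spare (what can eat an uncorrelated DDoS): {(provisioned - in_use) / 1_000:,.0f} Gbit/s")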
You're missing the economies of scale part.
OP is saying it's cheaper overall for a 10-million-customer company to add infrastructure for 1 million more customers than it is for a 10,000-customer company to add infrastructure for 1,000 more.
If you're looking at this as a "share of the pie", it's probably not going to make sense. The industry is not zero sum.
And how many companies want to also be able to build out their own CDN?
Not every company can be an expert at everything.
But perhaps many of us could buy from a CDN other than the major players if we want to reduce the likelihood of mass outages like this.
In my opinion, DDoS is possible only because there is no network protocol for a host to control traffic filtering on upstream providers (deny traffic from certain subnets or countries). If there were, everybody would prefer to write their own systems rather than rely on a harmful monopoly.
The recent Azure DDoS used 500k botnet IPs. These will have been widely distributed across subnets and countries, so your blocking approach would not have been an effective mitigation.
Identifying and dynamically blocking the 500k offending IPs would certainly be possible technically -- 500k /32s is not a hard filtering problem -- but I seriously question the operational ability of internet providers to perform such granular blocking in real-time against dynamic targets.
I also have concerns that automated blocking protocols would be widely abused by bad actors who are able to engineer their way into the network at a carrier level (i.e. certain governments).
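For the software side of the "500k /32s is not a hard filtering problem" claim, a minimal Python sketch with made-up addresses (real edge gear has to do this in hardware at line rate, which is the genuinely hard part; a host-side hash set is only meant to show the data-structure cost is trivial):

    # Illustrative only: membership tests against ~500k blocked /32s are cheap
    # in a plain hash set. Addresses here are random 32-bit integers.
    import random, time

    random.seed(0)
    blocked = {random.getrandbits(32) for _ in range(500_000)}    # fake blocklist

    probes = [random.getrandbits(32) for _ in range(1_000_000)]   # fake packet sources
    start = time.perf_counter()
    hits = sum(ip in blocked for ip in probes)
    elapsed = time.perf_counter() - start

    print(f"{len(blocked):,} entries, 1M lookups in {elapsed:.2f}s ({hits} hits)")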
> 500k /32s is not a hard filtering problem
Is this really true? What device in the network are you loading that filter into? Is it even capable of handling the packet throughput of that many clients while also handling such a large block list?
It also completely overlooks the fact that some of the traffic has spoofed source IP addresses, and a bad actor could use automated blackholing to knock a legitimate site offline.
What traffic would you ask the upstream providers to block if you were getting hit by Aisuru? Considering the botnet consists of residential routers, those are the same networks your users will be originating from. Sure, in the best case, if your site is very regional, you can just block all traffic from outside your country - but most services don't have that luxury.
Blocking individual IP addresses? Sure, but consider that before your service detects enough anomalous traffic from one particular IP and is able to send the upstream block request, it will already be down from the aggregate traffic. Even a "slow" DDoS with <10 packets per second from each source is enough to saturate your 10 Gbps link if the attacker has a million machines to originate traffic from.
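The arithmetic behind that saturation claim, with an assumed packet size (the 1,250-byte figure is just an assumption; anything near full-size frames gives a similar order of magnitude):

    # "Slow" DDoS arithmetic: many sources, each sending very little.
    SOURCES = 1_000_000          # botnet size from the example above
    PPS_PER_SOURCE = 10          # packets per second per source
    PACKET_BYTES = 1_250         # assumed packet size, purely illustrative

    total_bits = SOURCES * PPS_PER_SOURCE * PACKET_BYTES * 8
    print(f"~{total_bits / 1e9:.0f} Gbit/s aggregate")   # ~100 Gbit/s vs a 10 Gbit/s link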
In many cases the infected devices are in developing countries where none of your customers are. Many sites are regional - for example, a medium-sized business operating within one country, or even one city.
And even if the attack comes from your own country, it is better to block some of your customers and figure out what to do next than to have your site down entirely.
Could it not be argued that ISPs should be forced to block users with vulnerable devices?
They have all the data on what CPE a user has; they can send a letter and an email with a deadline, and cut the user off once it expires and the router still hasn't been updated/is still exposed to the wider internet.
> there is no network protocol for a host to control traffic filtering on upstream providers (deny traffic from certain subnets or countries).
There is no network protocol per se, but there are commercial solutions like Fortinet that can block countries, IIRC - though note that it's only IP-range based, so it's not worth a lot.
I think the parent means there is no network protocol which can propagate blocking in a sane manner between providers (something like BGP for firewalls).
edit: yes, you can use BGP to blackhole subnet traffic - but the standard doesn't play well if you want to blackhole unrelated subnets from an upstream network.
Unless you filter at the far end of the bottleneck, you still go offline.
I'm pretty sure BGP magic will let you blackhole a whole subnet.