Decentralisation fans take note: despite wanting to remain independent, the only effective solution was in this case to re-insert a giant global intermediary (Cloudflare) and block all the anonymous unaccountable Tor users.
If a decentralised system is to stay decentralised, it needs to consider spammy bad actors.
That's true: Cloudflare has mastered the art of DDoS mitigation, they have developed some amazing tools [1] to achieve it, and fortunately they are sharing some of that knowledge. With the advent of eBPF, I reckon this kind of tooling will become more accessible and easier to deploy for people who self-host. I also hope that DDoS mitigations based on a web of trust or other types of cryptographic identity [2] will come about in the future, although I wouldn't hold my breath.
[1] https://blog.cloudflare.com/l4drop-xdp-ebpf-based-ddos-mitig... [2] https://identity.foundation
Their main form of mitigation is sheer size. On a smaller ISP you can just get your entire uplink saturated by the attack. Even if you correctly drop 100% of the attack packets that reach you, your system is still unusable.
Mastodon is FEDERATED, not decentralized.
Federated systems, like email, have used these anti-spam techniques for a long time.
Federated systems always evolve into an Oligarchy, like Gmail/Hotmail/Yahoo, etc. or like banks, JPMorganChase/GoldmanSachs/etc.
If you want decentralization, you should go for something more like https://notabug.io/ (P2P Reddit), which uses the GUN protocol (mine). Or any WebTorrent-based approach.
This [1] is just one of many sources you can use for changing your app behavior, or null routing / firewalling. It is well maintained.
[1] - https://github.com/firehol/blocklist-ipsets
> and block all the anonymous unaccountable Tor users.
A single fediverse instance blocking Tor users doesn't make much of a difference: my instance still allows them, and I know of many that do.
Why couldn't private contract clauses be used to protect against malicious actors?
What if I own a server and connect it to an ISP under an agreement where the ISP is accountable for clearly malicious behavior coming from its connection (regardless of origin)?
Then, that ISP requires the same agreement from me, and everyone connecting to that ISP, and on down the chain.
Wouldn't we all be very active in policing bad actors in the networks we manage?
1) This doesn't deal with botnets and other compromised devices. Would you want your ISP to terminate your service if you (or worse, your roommate) got a virus?
2) This would require ISPs to do even more invasive monitoring of all traffic to be in compliance. They'd essentially have to DPI everything, or even break TLS between you and your destination, to know if your traffic was malicious. No thank you.
3) Many ISPs simply don't care. A lot of malicious traffic comes from countries where ISPs will just look the other way for a bit of cash. I suppose we could come up with a system that depeers bad ISPs, but this would have tons of collateral damage to innocents as well as reintroducing the exact centralization we're trying to avoid (where's the "master list" of bad ISPs to depeer?)
Whatever the solution to bad actors online is, it isn't ISPs.
Probably worth thinking through why this isn't done already: firstly it's a lot of work, and secondly the cross-network accountability is very short on choices. After a certain point you have to decide whether you want to cut off a majority of the internet or put up with it.
It also requires de-anonymisation (so you can identify who the bad actor actually is!) - you wouldn't be allowed a Tor exit node on this network, for example.
This would not actually stop anything, just create a bunch of lawsuits.
It was far from the only effective solution; it was probably just an easy path for someone who has no idea about DDoS attacks but is influenced by advertisement and propaganda. Volumetric attacks don't actually require centralized global intermediaries to mitigate; there are other ways to do it. Layer 7 attacks are application-specific and should be handled by the application, or by someone running it who understands all the specifics, but most definitely not by a global intermediary: an intermediary unaware of the specifics will reject plenty of legitimate traffic. And blocking Tor to mitigate Layer 7 attacks is pretty silly.
Also, researchers of decentralized systems mostly ignored the existence of malicious actors only until the very early 2000s; since then everyone has become well aware of them and started considering how to deal with them.
> Volumetric attacks don't actually require centralized global intermediaries to mitigate, there are other ways to do it
Well, don't leave us hanging, do enlighten us how
Can you provide an example of how to mitigate a volumetric attack without significant reliance on intermediaries?
hey, i'm the author of the article
really... surprised it got submitted here
incidentally i'm running pleroma, not mastodon. minor detail but you know
I'm quite active on Mastodon and HN, thought this might be of interest.
Would you prefer the title were modified? The mods can do that. I thought that specifying what the DDoS mitigation was applied to would be helpful, though my presumption of Mastodon was in error, apologies.
I'm not too bothered about correcting it, just thought it good to note
To avoid leaking IPs, you can use cloudflared tunnel. It might get pricy if you move a lot of bytes, but it’ll isolate you from IP leaking issues.
oh, i found out where the leak was
it's right at the end of the article - the attacker was abusing the "create a preview card of any posted URL" feature - he'd post a link, wait for pleroma to go and grab the url to preview it, then narrow down which one was mine based on user agent
i added an upstream proxy and anonymised the user agent, so even if he were to do that, the most he'd find was my proxy box
I block all Tor traffic with iptables and ipset - which allows O(log n) lookup time for each request when checking it against the Tor list. I wonder if this would have been your end-all solution. http://ipset.netfilter.org/ipset.man.html
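For reference, here's a minimal sketch of that ipset + iptables setup (the set name and exit-list URL are illustrative; it needs root, so treat it as a config fragment rather than a drop-in script):

```shell
# Build a hash-based set of Tor exit IPs and drop matching inbound traffic.
# torbulkexitlist is one commonly used source of current exit addresses.
ipset create tor-exits hash:ip -exist
curl -s https://check.torproject.org/torbulkexitlist |
  while read -r ip; do ipset add tor-exits "$ip" -exist; done
iptables -I INPUT -m set --match-set tor-exits src -j DROP
```

Re-running the fetch loop from cron keeps the set current without having to touch the iptables rule again.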
On the subject of the IP leaking: note that IPv4 only has 2^32 addresses, and people can and do mass scan all of them (see e.g. shodan.io). If your service exposes any identifiable information (i.e. if it's not completely blocking all non-Cloudflare IPs), then it's fairly easy to find even if it's "unguessable".
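To put the scan feasibility in numbers, a quick back-of-the-envelope (the ~1M probes/sec rate is an assumption, roughly what masscan-class tools claim on good hardware):

```shell
# Sweeping the entire IPv4 space at ~1,000,000 probes/sec:
seconds=$(( (1 << 32) / 1000000 ))
echo "$(( seconds / 60 )) minutes"   # → 71 minutes
```

So an "unguessable" IP buys you very little once anything identifiable is listening on it.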
Cloudflare EM for DDoS Protection here.
If a customer wants to hide their IP then the best way to do it:
1. Onboard onto Cloudflare
2. Audit your app and ensure you aren't leaking your IP (are you sending email directly? making web calls directly? - make adjustments to use APIs of other providers, i.e. send emails via Sendgrid API, etc)
3. Change your IP (it was previously public knowledge in your DNS records)
At this point your IP should be unknown, so...
4. Use `cloudflared` and https://www.cloudflare.com/en-gb/products/argo-tunnel/ to have your server call us, rather than us call you (via DNS A / AAAA records)
Because this connects a tunnel from your server, you can configure iptables and your firewall to close everything :)
Here's the help info: https://developers.cloudflare.com/argo-tunnel/quickstart/
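A rough sketch of what that setup looks like (commands follow the quickstart linked above and may have changed since; the hostname and port are placeholders):

```shell
# Authorise cloudflared against your zone, then dial OUT to Cloudflare;
# no inbound port needs to be open for the tunnel to work.
cloudflared tunnel login
cloudflared tunnel --hostname social.example.com --url http://localhost:3000

# With the tunnel dialling out, inbound can be closed almost entirely:
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP   # mind your SSH session before setting this
```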
PS: to the OP, I tried to contact you via Keybase; feel free to ping my email. We are working to improve the DDoS protection for attacks in the range you were impacted by, and the product manager would enjoy your feedback if you're willing to share it in the new year.
Is cloudflare affordable for an open source and low-funds project? (I honestly don't know the pricing, this isn't meant to be argumentative)
hey, OP here
I'm no longer on keybase, deleted it a few days ago - but I'm more than happy to share what I found if you want
pretty sure it's nothing groundbreaking though
other contact methods are listed on my profile
(edit: by OP I mean article author)
Well, that would only work if the other end responds to a request to the IP address with a cert that includes the proper domain.
If you set up Cloudflare properly, then you only see a CF-based certificate, not the actual hostnames. Since you didn't send a proper hostname (unless you use PTR, which isn't reliable either), it'll use whatever default hostname it has configured (or just close the connection).
Or in a case like my setup, you'll get an empty 0-byte response if no Host: header is present. The certificate is a wildcard for the primary domain the server runs, not even related to the mastodon service.
And of course, this post contains enough information to probably nail it down but on the other hand, mass scanning the internet is a lot of trouble.
This is huge. There are a ton of misconfigured Apache and nginx reverse proxies out there that expose the primary domain name of the site being served. You can quickly test this yourself by running "curl -vk https://your.ip.address" and seeing what pops up in the CN field or Location header.
Even worse is the pattern of requesting Let's Encrypt certificates for multiple domains on one certificate. All of a sudden you're leaking development server hostnames, peeling off the white label of multi-tenant setups, and making things easier for automated scanners.
I get it that security by hostname obscurity is a poor practice on its own, but there's also something to be said for cutting down a large amount of malicious traffic with some common best practices.
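You can reproduce what a scanner sees without touching anyone's server: generate a throwaway certificate with a tell-tale CN (a hypothetical internal hostname here), then read the subject the same way `openssl s_client ... | openssl x509 -noout -subject` would against a live socket:

```shell
# A cert whose CN leaks an internal hostname, and the one-liner that reads it
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/leak-demo.key -out /tmp/leak-demo.crt \
  -subj "/CN=staging.internal.example"
openssl x509 -in /tmp/leak-demo.crt -noout -subject
# prints the CN, e.g. "subject=CN = staging.internal.example" (spacing varies by version)
```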
That's an interesting side topic. What do services like Shodan do in an ipv6 world? Dumb brute force scanning seems unlikely.
They run NTP servers that are (were?) included in the NTP pool to fish for clients’ IPv6 addresses.
yep, i had the same thought
which is what led me to block all other IPs - it's not the hardest thing to just make an openssl req and get the common names of the certificate returned
especially if you know the hosting provider, which narrows down the ip space significantly
Isn't a federated system like Mastodon set up in a way that users have access lists, and if one server is down they simply connect to the next? I would expect to just limit access to my server so that no illegal content is added to its storage and I don't have to pay horrendous fees for the network traffic, and then just let the DDoS happen. At some point it has to stop, since the DDoSer will find nicer targets and the source of your problem will run out of funds. And then your service simply continues.
That's at least how I think a federated system should work. Not sure if reality matches that.
From my understanding of Mastodon, you register an account with an instance and that's where your account and data are stored. You then get an address which is something like me@instance.tld. Federated instances can then connect together to read and exchange information, but for the most part your data doesn't leave your instance. I imagine the idea behind this is you can choose (or host) where your data sits, but still interact with a large network of individuals. That said, my understanding of Mastodon is limited.
As others have noted: accounts are instance-bound.
Other Fediverse protocols -- I believe either Friendica or ... I think Hubzilla? -- have some level of account portability.
There's a fairly long-standing request for Mastodon to support this. For now, you can have accounts on other instances forwarded to your primary.
While you can export and import your own follows, followers of your account won't automatically redirect to the new home.
Mastodon content, however, will syndicate across the Fediverse, and even some of my posts from a now-dead instance can (occasionally) be found.
> While you can export and import your own follows, followers of your account won't automatically redirect to the new home.
This information is outdated since October 11, 2019. You can move followers from one account to another in Mastodon 3.0.
No, accounts are instance-bound, you can't just log into another instance in the federation.
It just boggles my mind how petty and vindictive people can be.
Also: how much just one asshole can screw things up for everyone.
Unfortunately, this is probabilistically likely as any community grows. Equivalent raging occurred fairly early in the life of other social networks such as Usenet and The WELL.
> Cloudflare logs really suck for non-enterprise customers.
Cloudflare logs also suck for enterprise customers.
In what way do they suck? We have incredibly detailed logs available to our customers.
https://developers.cloudflare.com/logs/about/
I want to write a bunch of code just to see decent logs. Thanks for passing the work to me.
Great write-up. As someone unfamiliar with the legal side of this, would it be worth the author contacting law enforcement of some kind?
Unlikely. From past experience (a 4-year-long online harassment and dirty-tricks campaign!), the police are generally clueless about online matters. The most common responses I've had were "we don't police the Internet", "there's no evidence" or "there's nothing we can do".
Unless you're, yanno, EMI Records, Sky TV or someone with political sway.
The most productive (best outcome) way of handling it tends to be to turn your OPSEC up to eleven and put all your XP into defence. Again, based on experience.
Thank you for your insight. As you point out, it's no alternative to a robust defence, I was just curious as to whether it would ever actually be investigated. I would probably report it still just for accountability (e.g. insurance). Especially now you can report online.
If you want to just have a piece of official documentation, why not. And if you know Putin and Xi personally maybe you even have a chance to get the DDOSer.
Another option, since you already have OSSEC running: create a script for OSSEC that interacts with the Cloudflare API to block the flooding IPs. This way the flood never reaches your server.
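A sketch of what such an active-response hook could look like (ZONE_ID, API_TOKEN and the argument position are placeholders/assumptions; the endpoint is Cloudflare's v4 IP Access Rules API):

```shell
#!/bin/sh
# JSON body for a Cloudflare "block this IP at the edge" access rule
cf_payload() {
  printf '{"mode":"block","configuration":{"target":"ip","value":"%s"},"notes":"ossec auto-block"}' "$1"
}

# POST the rule; ZONE_ID and API_TOKEN must come from the environment
cf_block_ip() {
  curl -s -X POST \
    "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/firewall/access_rules/rules" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    --data "$(cf_payload "$1")"
}

# OSSEC active-response scripts receive the offending source IP as an
# argument; uncomment once wired into ossec.conf:
# cf_block_ip "$3"
```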
Also a good idea to have more than one DNS server, from separate providers denoted as authoritative for your domain. Assuming your CDN will let you.
I believe Cloudflare will remove the site if other nameservers are added. Simply using a non-CF registrar and having a backup authoritative nameserver will suffice.
> on my own instance of Mastodon
Ever since they injected 70kb of JavaScript into my site, I don't trust Cloudflare anymore. I just checked, and other CDNs seem to be very expensive.
I wonder if there are any other ways to defend against DDoS?
Maybe looking for a host that helps with such matters? But then they will probably be more expensive, too?