Comment by Bender

10 days ago

Looks like it is hosted at Equinix in NL? Or maybe just part of it? Is it behind a load balancer, maybe something like HAProxy? If so, were stick tables set up to limit rates by cookie, require people to be logged in with unique accounts, and limit anonymous access after so many requests? I know limiting anonymous access is not great, but it is something that could be enabled only under high load, so that instead of the site going offline for everyone it would just be limited for anonymous users. Degradation vs. critical outage.
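
For what it's worth, that kind of stick-table setup isn't much config. A minimal sketch, assuming HAProxy terminating TLS, a session cookie named "session_id", a backend called "be_app", and made-up thresholds (none of this is known to be OSM's actual setup):

    frontend fe_https
        bind :443 ssl crt /etc/haproxy/site.pem
        # Per-cookie request-rate tracking for logged-in users
        stick-table type string len 64 size 1m expire 30m store http_req_rate(10s)
        acl has_session req.cook(session_id) -m found
        http-request track-sc0 req.cook(session_id) if has_session
        http-request deny deny_status 429 if has_session { sc_http_req_rate(0) gt 50 }
        # Per-IP tracking for anonymous traffic, with a much tighter budget
        http-request track-sc1 src table anon_ips if !has_session
        http-request deny deny_status 429 if !has_session { sc_http_req_rate(1,anon_ips) gt 10 }
        default_backend be_app

    backend anon_ips
        # Dummy backend used only to hold the per-IP table
        stick-table type ip size 1m expire 10m store http_req_rate(10s)

The anonymous rule could sit behind a single toggle so it only kicks in when the site is struggling.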

On a separate note, have tcpdump captures been done on these excessive connections? Minus the IP, what do their SYN packets look like? Minus the IP, what do the corresponding log entries look like in the web server? Are they using HTTP/1.1 or HTTP/2.0? Are they missing any headers expected from a real person, such as Accept-Language or the Sec-Fetch-Mode values (cors, no-cors, navigate)?

    tcpdump -p --dont-verify-checksums -i any -NNnnvvv -B32768 -c32 -s0 'port 443 and tcp[13] == 2'
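
On the log side, even a default combined-format access log records the protocol version, so a quick tally shows how much of the flood is stuck on HTTP/1.1 versus HTTP/2.0 (the path is an example; adjust for whatever the web server actually writes):

    # Count requests per HTTP protocol version in a combined-format access log
    awk -F'"' '{ split($2, req, " "); ver[req[3]]++ } END { for (v in ver) print ver[v], v }' /var/log/nginx/access.log | sort -rn

Headers like Accept-Language and the Sec-Fetch-* family are not in the default format, so they would need to be added to the log format before the same kind of tally could answer the header question.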

Is there someone at OpenStreetMap who can answer these questions?

Disclosure: I am part of the mostly volunteer-run OpenStreetMap ops team.

Technically, we are able to block and restrict the scrapers after the initial request from an IP. We've seen 400,000 IPs in the last 24 hours. Each IP only makes a few requests. Most are not very good at faking browsers, but they are getting better (HTTP/1.1 vs HTTP/2, obviously faked headers, etc.).
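
As a rough illustration of that kind of blocking, and not a description of our actual setup: an nftables set with per-element timeouts copes fine with hundreds of thousands of addresses, along these lines (names and the example address are made up):

    # Named set with a 24h per-element timeout, dropping web traffic from listed addresses
    nft add table inet filter
    nft add set inet filter scraper_ips '{ type ipv4_addr; flags timeout; timeout 24h; }'
    nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
    nft add rule inet filter input ip saddr @scraper_ips tcp dport '{ 80, 443 }' drop
    # Add offenders as they are identified
    nft add element inet filter scraper_ips '{ 203.0.113.42 }'

The hard part is not the blocking mechanism, it is deciding which of those 400,000 IPs to put in the set without hurting real users.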

The problem has been going on for over a year now. It isn't going away. We need journalists and others to help us push back.

  • Hey. I run a small community forum and I've been dealing with this exact same kind of behaviour, where well over 99% of requests are bad crawlers. There used to be plenty of "tells" for the faked browsers, HTTP/1.1 being a huge one. As you said, however, they're getting a bit smarter about that, and it's becoming increasingly difficult to differentiate their traffic from legitimate traffic.

    It's been getting worse over the past year, with the past few weeks in particular seeing a massive change literally overnight. I had to aggressively tune my WAF rules just to get things under control. With Cloudflare I'm issuing browser challenges to any client that looks even remotely suspicious (roughly the kind of rule sketched at the end of this comment), and the pass rate is currently below 0.5%. For my users' sake, a successful browser challenge is "valid" for over a month, but this still feels like another thing that'll eventually be bypassed.

    I'd be keen to know if you've found any other effective ways of mitigating this most recent wave of aggressive scraping. Even a simple "yes" or "no" would be appreciated; I think it's fair to be apprehensive about sharing specific details publicly, since even a lot of folks here on HN seem to think it's their right to scrape content with orders of magnitude higher throughput than all users combined.

    I really don't know how this is sustainable long-term. It's eaten up quite a lot of my personal time and effort just for the sake of a hobby that I otherwise greatly enjoy.
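
    The kind of rule I mean above isn't exotic; with Cloudflare's custom rules it can be an expression in this spirit, with the action set to Managed Challenge (the specific tells here are illustrative, not my exact rules):

        (http.request.version in {"HTTP/1.0" "HTTP/1.1"}) and not cf.client.bot

    The month-long validity is just the zone's Challenge Passage setting turned way up.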

  • I hear ya. This is just my opinion, but I don't think journalists are going to be much help. The bots would have to be hurting something the government owns, or is paying for, to really get it to act; e.g. some big government orgs embed your maps on their sites. The government would have to create legislation, then someone would have to trace the bots back to their operators for attribution, and then someone would have to file lawsuits against them once it is illegal. Or you could try using a ToS/AUP to go after them, assuming attribution. I am not a lawyer.

    I think your only hope is to either find subtle differences between them and real, legit users, change how your site works so that bots have to be authenticated unless they have a whitelisted IP/CIDR, or put your site behind something else that spots the bots. Beyond that, all anyone can do is beef up their infrastructure to handle much more than the bots could dish out.

    Have you tried silly simple things like hidden javascript puzzles the browser has to solve?

I think it could be worth trying to block them with TLS fingerprinting, or, since they think they're being hammered through residential proxies, https://spur.us could be worth a try.

  • My personal preference is to first put a small amount of effort into finding something unique to the bots that can, more often than not, be dropped with a simple firewall rule or load balancer ACL. The botters almost always miss something.