Comment by simondotau
14 hours ago
The more things change, the more they stay the same.
About 10-15 years ago, the scourge I was fighting was social media monitoring services, companies paid by big brands to watch sentiment across forums and other online communities. I was running a very popular and completely free (and ad-free) discussion forum in my spare time, and their scraping was irritating for two reasons. First, they were monetising my community when I wasn’t. Second, their crawlers would hit the servers as hard as they could, creating real load issues. I kept having to beg our hosting sponsor for more capacity.
Once I figured out what was happening, I blocked their user agent. Within a week they were scraping with a generic one. I blocked their IP range; a week later they were back on a different range. So I built a filter that would pseudo-randomly[0] inject company names[1] into forum posts. Then any time I re-identified[2] their bot, I enabled that filter for their requests.
The scraping stopped within two days and never came back.
--
[0] Random but deterministic based on post ID, so the injected text stayed consistent. (A rough sketch of this follows these notes.)
[1] I collated a list of around 100 major consumer brands, plus every company name the monitoring services proudly listed as clients on their own websites.
[2] This was back around 2009 or so, so things weren't nearly as sophisticated as they are today, both in terms of bots and anti-bot strategies. One of the most effective tools I remember deploying back then was analysis of all HTTP headers. Bots would spoof a browser UA, but almost none would get the full header set right: headers like Accept-Encoding or Accept-Language were either absent, or were static strings that didn't exactly match what the real browser would actually send.
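To make that concrete, here's a rough Python sketch of the kind of injection filter described in [0] and [1]. It isn't the original code (that ran inside the forum software); the brand list and function names are purely illustrative.

    # Seed the RNG on the post ID so every render of a given post injects the
    # same brand mentions, then apply the filter only to identified bot traffic.
    import random

    BRANDS = ["Adidas", "Coca-Cola", "Nokia", "Sony"]  # illustrative; the real list had ~100 entries

    def inject_brands(post_id: int, body: str, mentions: int = 2) -> str:
        rng = random.Random(post_id)                # deterministic per post
        sentences = body.split(". ")
        for brand in rng.sample(BRANDS, k=min(mentions, len(BRANDS))):
            pos = rng.randrange(len(sentences) + 1)
            sentences.insert(pos, f"Just picked something up from {brand} and it was great")
        return ". ".join(sentences)

    def render_post(post_id: int, body: str, is_monitoring_bot: bool) -> str:
        return inject_brands(post_id, body) if is_monitoring_bot else body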
I did something similar with someone who was using my site's donation form to test huge batches of credit card numbers. I would see hundreds of attempted (and mostly declined) $1 donations start pouring in, and I'd block the IP. A little while later it would restart from another IP. When it became clear they were not giving up easily, I changed tack: instead of blocking them, I would return random success/failure messages at roughly the same success rate they had been seeing on previous attempts. I didn't really try to charge those cards, of course.
I like how this kind of response is very difficult for them to detect when it's turned on, and as a bonus, it pollutes their data. They stopped trying a few days after that.
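A rough Python sketch of that trick; the approval rate, message strings, and function names are illustrative, not the original implementation:

    # For flagged card-testing traffic, never contact the payment processor;
    # return fake results at roughly the approval rate the attacker was
    # already seeing, so the change is hard to spot and their data is polluted.
    import random

    OBSERVED_APPROVAL_RATE = 0.05   # illustrative: fraction of their attempts that had been succeeding
    DECLINE_REASONS = ["card_declined", "insufficient_funds", "do_not_honor"]

    def handle_donation(card_details: dict, flagged_as_tester: bool) -> dict:
        if flagged_as_tester:
            if random.random() < OBSERVED_APPROVAL_RATE:
                return {"status": "success", "charge_id": f"ch_{random.randrange(10**8):08d}"}
            return {"status": "declined", "reason": random.choice(DECLINE_REASONS)}
        return charge_real_card(card_details)    # normal path for legitimate donors

    def charge_real_card(card_details: dict) -> dict:
        ...  # real payment-processor call, out of scope for this sketch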
Was it always $1? If I were the attacker, I'd surely pick a random amount. My guess is that $1 donations would be an outlier in the distribution and therefore easy to spot.
It’s also interesting that merchants (presumably) don’t have a mechanism to flag transactions as having a non-zero chance of being fraudulent, or to waive any dispute rights on them.
As a merchant, it would be nice to be able to ask the bank to verify certain transactions with their customer. If I were a customer, I would want to know that someone tried to use my card number to donate to some death metal training school in the Netherlands.
Thank you very much for the observation about headers. I just looked closer at the bot traffic I'm currently receiving on my small fediverse server and noticed that it uses user agents of old Chrome versions, and also that the Accept-Language header is never set, which is indeed something no real Chromium browser would do. So I added a rule to my nginx config to return a 403 to these requests. The number of these requests per second already seems to be declining.
The important thing is to be aware of your adversary. If it’s a big network which doesn’t care about you specifically, block away. But if it’s a motivated group interested in your site specifically, then you have to be very careful. The extreme example of the latter is yt-dlp, which continues to work despite YouTube’s best efforts.
For those adversaries, you need to work out a careful balance between deterrence, solving the actual problems (e.g. resource abuse), and your desire to “win”. In extreme cases your best strategy is for your filter to “work” but be broken in hard-to-detect ways. For example, showing all but the most valuable content. Or spiking the data with just enough rubbish to diminish its value. Or having the content indexes return delayed/stale/incomplete data.
And whatever you do, don’t use incrementing integers. Ask me how I know.
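A rough Python sketch of the “quietly degraded” idea above; the thresholds and function names are illustrative:

    # Suspected scrapers get truncated or reordered results instead of an
    # outright block, so the filter is much harder for them to detect.
    import random

    def search_index(query: str, suspected_scraper: bool) -> list[dict]:
        results = real_search(query)
        if not suspected_scraper:
            return results
        results = results[: max(1, len(results) // 2)]   # withhold the tail (often the freshest items)
        if random.random() < 0.2:
            random.shuffle(results)                       # occasionally scramble the ranking
        return results

    def real_search(query: str) -> list[dict]:
        ...  # the real index lookup, out of scope for this sketch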
In my particular case, I don't mind the crawling. It's a fediverse server. There is nothing secret there. All content is available via ActivityPub anyway for anyone to grab. However, these bots specifically violated both robots.txt and rel="nofollow" while hitting endpoints like "log in to like this post" pages tens of times per second. They were just wasting my server's resources for nothing.
It's been a few hours. These particular bots have completely stopped. There are still some bot-looking requests in the log, with a newer-version Chrome UA on both Mac and Windows, but there aren't nearly as many of them.
Config snippet for anyone interested:
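Roughly along these lines (a minimal sketch of the rule described above, not my exact config; the $suspect_bot variable name is arbitrary):

    # Inside the server block: flag requests that claim a Chrome User-Agent but
    # send no Accept-Language header at all (something real Chrome never does),
    # and return 403 for them.
    set $suspect_bot "";
    if ($http_accept_language = "") {
        set $suspect_bot "no_lang";
    }
    if ($http_user_agent ~* "Chrome/") {
        set $suspect_bot "${suspect_bot}+chrome";
    }
    if ($suspect_bot = "no_lang+chrome") {
        return 403;
    }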
That's a simple and effective way to block a lot of bots; gonna implement that on my sites. Thanks!
In the movie The Imitation Game, the Alan Turing character recognizes that acting on the intelligence 100% of the time gives away to the opposition that you've identified them, and sets off the next iteration of “cat and mouse”. He comes up with a specific percentage of the time that the Allies should sit on the intelligence and not warn their own people.
If, instead, you only act on a percentage of requests, you can add noise in an insidious way without signaling that you caught them. It will make their job of troubleshooting and crafting the next iteration much harder. Also, making the response less predictable is a good idea: throw different HTTP error codes, respond with somewhat inaccurate content, etc.
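A rough Python sketch of that approach; the 30% rate and the response mix are illustrative:

    # Act on only a fraction of flagged requests, and vary the response when
    # acting, so the adversary can't cleanly tell when they've been detected.
    import random

    ACT_RATE = 0.30
    DEGRADED_RESPONSES = [
        ("error", 500),          # intermittent server error
        ("error", 429),          # occasional rate limit
        ("stale_content", 200),  # plausible-looking but outdated page
    ]

    def respond(request, flagged: bool):
        if not flagged or random.random() > ACT_RATE:
            return serve_normally(request)       # most flagged requests still look fine
        kind, status = random.choice(DEGRADED_RESPONSES)
        return serve_degraded(request, kind, status)

    def serve_normally(request):
        ...  # normal handler, out of scope for this sketch

    def serve_degraded(request, kind, status):
        ...  # returns the chosen noisy or degraded response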
The vast majority of bots are still failing the header test; we organically arrived at the exact same filtering in 2025. The bots followed the exact same progression too: one IP, lie about the user agent, one ASN, multiple ASNs, then lie about everything and use residential IPs, but still botch the headers.
Why do the company names chase away bots? Is it just that you’re destroying their signal because they’re looking for mentions of those brands?
It’s both a destruction of signal and an injection of noise. Imagine you worked for Adidas and you started getting a stream of notifications about your brand, and they were all nonsense. This would be an annoyance and harm the reputation of that monitoring service.
They would have received multiple complaints about it from customers, performed an investigation, and ultimately performed a manual excision of the junk data from their system: both the raw scrapes and anywhere it was ingested and processed. This was probably a simple operation, but it might not have been if their architecture didn’t account for this vulnerability.
I also didn't follow that part. Their step 2 seems to be a general-purpose bot detection strategy that works independently of their step 1 ("randomly mention companies").
It spams the bot with false positives and encourages the bot admins to denylist the site to protect the bot's signal-to-noise ratio.