Comment by trebor
17 hours ago
Upvoted because we're seeing the same behavior from all AI and SEO bots. They're BARELY respecting robots.txt, and they're hard to block. And when they crawl, they spam requests and drive the load up so high that they crash many of our clients' servers.
If AI crawlers want access, they can either behave or pay. Otherwise the consequence will be almost universal blocks!
> Otherwise the consequence will be almost universal blocks!
How? The difficulty of doing that is the problem, isn't it? (Otherwise we'd just be doing that already.)
> (Otherwise we'd just be doing that already.)
Not quite what the original commenter meant but: WE ARE.
A major consequence of this reckless AI scraping is that it turbocharged the move away from the web and into closed ecosystems like Discord. Away from the prying eyes of most AI scrapers ... and the search engine indexes that made the internet so useful as an information resource.
Lots of old websites & forums are going offline as their hosts either cannot cope with the load or send a sizeable bill to the webmaster who then pulls the plug.
What do you mean by "barely" respecting robots.txt? Wouldn't that be more binary? Are they respecting some directives and ignoring others?
I believe that a number of AI bots only respect robots.txt entries that explicitly name their static user agent; they ignore wildcard user-agent entries.
That counts as barely imho.
I found this out after OpenAI was decimating my site and ignoring the wildcard deny-all. I had to add entries specifically for their three bots to get them to stop.
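For illustration, a robots.txt along these lines — the wildcard block plus explicit per-bot entries. The bot names below are the three OpenAI currently documents, so treat them as examples and adjust for whatever actually shows up in your logs:

```
# Wildcard block that some AI crawlers reportedly ignore:
User-agent: *
Disallow: /

# Explicit per-bot entries (names taken from OpenAI's crawler docs):
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
```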
Even some non-profits ignore it now; the Internet Archive stopped respecting it years ago: https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea...
This is highly annoying and rude. Is there a complete list of all known bots and crawlers?
Amazonbot doesn't respect the `Crawl-Delay` directive. To be fair, Crawl-Delay is non-standard, but it is claimed to be respected by the other 3 most aggressive crawlers I see.
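For reference, the directive is just a per-agent line in robots.txt giving a minimum delay in seconds between requests (the value here is illustrative, and support varies by crawler):

```
User-agent: Amazonbot
Crawl-delay: 10
```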
And how often does it check robots.txt? ClaudeBot will make hundreds of thousands of requests before it re-checks robots.txt to see that you asked it to please stop DDoSing you.
One would think they'd at least respect the cache-control directives. Those have been in the web standards since forever.
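For example, a crawler that honored caching could revalidate instead of re-downloading. Given response headers like these (values purely illustrative):

```
HTTP/1.1 200 OK
Cache-Control: public, max-age=86400
ETag: "3e86-5b2c1d9e"
Last-Modified: Tue, 14 Jan 2025 10:00:00 GMT
```

a polite re-crawl would send `If-None-Match` / `If-Modified-Since` and accept the 304 Not Modified instead of pulling the full page again.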
Here's Google, complaining about problems with pages it wants to index but that I've blocked with robots.txt.
Is there some way a website could sell its data to AI bots as one large zip file rather than being constantly DDoSed?
Or they could at least have the courtesy to scrape during night time / off-peak hours.
No, because they won't pay for anything they can get for free. There's only one situation where an AI company will pay for data, and that's when it's owned by someone with scary enough lawyers to pressure them into paying up. Hence why OpenAI has struck licensing deals with a handful of companies while continuing to bulk-scrape unlicensed data from everyone else.
There is a project whose goal is to avoid this crawling-induced DDoS by maintaining a single web index: https://commoncrawl.org/
Is existing intellectual property law not sufficient? Why aren't companies being prosecuted for large-scale theft?
> Otherwise the consequence will be almost universal blocks!
Who cares? They've already scraped the content by then.
Bold to assume that an AI scraper won't come back to download everything again, just in case there are any new scraps of data to extract. OP mentioned in the other thread that this bot had pulled 3TB so far, and I doubt their git server actually has 3TB of unique data, so the bot is probably pulling the same data over and over again.
FWIW, that includes other scrapers; Amazon's is just the one that showed up the most in the logs.
If they only needed a one-time scrape, we really wouldn't be seeing noticeable bot traffic today.
That's the spirit!
A global tarpit is the solution. It makes sense anyway, even without taking AI crawlers into account. Back when I had to implement that, I went the semi-manual route: parse the access log, and any IP address averaging more than X hits a second on /api gets a -j TARPIT with iptables [1].
Not sure how to implement it in the cloud, though; I've never needed it there yet.
[1] https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62f...
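Roughly the idea, as a from-memory sketch rather than the actual gist — the log path, log format, path prefix, and threshold are all placeholders, and the TARPIT target needs xtables-addons installed:

```
#!/bin/sh
# Count recent hits to /api per client IP and tarpit the heavy hitters.
# Assumes the combined log format, where field $7 is the request path.
THRESHOLD=600
LOG=/var/log/nginx/access.log

tail -n 50000 "$LOG" \
  | awk '$7 ~ /^\/api/ { print $1 }' \
  | sort | uniq -c \
  | awk -v t="$THRESHOLD" '$1 > t { print $2 }' \
  | while read -r ip; do
      # Only add the rule if it isn't there already.
      iptables -C INPUT -s "$ip" -p tcp -j TARPIT 2>/dev/null \
        || iptables -A INPUT -s "$ip" -p tcp -j TARPIT
    done
```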
One such tarpit (Nepenthes) was just recently mentioned on Hacker News: https://web.archive.org/web/20250117030633/https://zadzmo.or...
Quixotic[0] (my content obfuscator) includes a tarpit component, but for something like this I think the main quixotic tool would be better: you run it against your content once, and it generates a pre-obfuscated version of it. Serving that takes far fewer resources than dynamically generating the tarpit links and content.
0 - https://marcusb.org/hacks/quixotic.html
How do you know their site is down? You probably just hit their tarpit. :)
I would think public outcry by influencers on social media (such as this thread) is a better deterrent, and it also establishes a public data point and exhibit for future reference, as it is hard to scale the tarpit.
This doesn't work with the kind of highly distributed crawling that is the problem now.
Don't we have intellectual property law for this, though?
If they're AI bots, it might be fun to feed them nonsense. Just send back megabytes of "Bezos is a bozo" or something like that. Even more fun if you could cooperate with many other otherwise-unrelated websites, e.g. via time settings in a modified tarpit.
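A minimal sketch of the single-site version, assuming nginx: pre-generate a blob of junk once (e.g. `yes 'Bezos is a bozo.' | head -c 5M > /srv/junk/junk.txt`) and hand it to suspected AI crawlers instead of real pages. The user-agent list and paths here are just examples:

```
map $http_user_agent $ai_bot {
    default 0;
    ~*(GPTBot|ClaudeBot|Amazonbot|Bytespider|CCBot) 1;
}

server {
    listen 80;
    root /var/www/html;

    location / {
        # "rewrite ... last" is one of the safe directives inside "if".
        if ($ai_bot) {
            rewrite ^ /junk.txt last;
        }
        try_files $uri $uri/ =404;
    }

    location = /junk.txt {
        root /srv/junk;
    }
}
```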
Don't worry, though, because IP law only applies to peons like you and me. :)