Comment by trebor

17 hours ago

Upvoted because we’re seeing the same behavior from all AI and SEO bots. They BARELY respect robots.txt and are hard to block. And when they crawl, they spam requests and drive load so high that they crash many of our clients’ servers.

If AI crawlers want access, they can either behave or pay. The consequence will be almost universal blocks otherwise!

> The consequence will be almost universal blocks otherwise!

How? The difficulty of doing that is the problem, isn't it? (Otherwise we'd just be doing that already.)

  • > (Otherwise we'd just be doing that already.)

    Not quite what the original commenter meant but: WE ARE.

    A major consequence of this reckless AI scraping is that it turbocharged the move away from the web and into closed ecosystems like Discord. Away from the prying eyes of most AI scrapers ... and the search engine indexes that made the internet so useful as an information resource.

    Lots of old websites & forums are going offline as their hosts either cannot cope with the load or send a sizeable bill to the webmaster who then pulls the plug.

What do you mean by "barely" respecting robots.txt? Wouldn't that be more binary? Are they respecting some directives and ignoring others?

  • I believe that a number of AI bots only respect robots.txt entries that explicitly name their user agent. They ignore wildcard user-agent entries.

    That counts as barely imho.

    I found this out after OpenAI was decimating my site and ignoring the wildcard deny-all. I had to add entries specifically for their three bots to get them to stop.
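
    A sketch of what that ends up looking like (assuming the three bots are GPTBot, ChatGPT-User and OAI-SearchBot, which is what OpenAI documents; the comment above doesn't name them):

        # Wildcard deny-all, which some AI crawlers ignore
        User-agent: *
        Disallow: /

        # Explicit per-bot entries, which got the crawling to stop
        User-agent: GPTBot
        Disallow: /

        User-agent: ChatGPT-User
        Disallow: /

        User-agent: OAI-SearchBot
        Disallow: /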

  • Amazonbot doesn't respect the `Crawl-Delay` directive. To be fair, Crawl-Delay is non-standard, but it is claimed to be respected by the other 3 most aggressive crawlers I see.

    And how often does it check robots.txt? ClaudeBot will make hundreds of thousands of requests before it re-checks robots.txt to see that you asked it to please stop DDoSing you.
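
    For reference, the directive in question is just a line in robots.txt; a sketch (the delay value here is illustrative):

        # Crawl-delay is non-standard: honoured by some crawlers, ignored by others
        User-agent: *
        Crawl-delay: 10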

    • One would think they'd at least respect the cache-control directives. Those have been in the web standards since forever.

  • Here's Google complaining about pages it wants to index but that I blocked with robots.txt:

        New reason preventing your pages from being indexed
    
        Search Console has identified that some pages on your site are not being indexed 
        due to the following new reason:
    
            Indexed, though blocked by robots.txt
    
        If this reason is not intentional, we recommend that you fix it in order to get
        affected pages indexed and appearing on Google.
        Open indexing report
        Message type: [WNC-20237597]

Is there some way websites can sell that data to AI bots as one large zip file rather than being constantly DDoSed?

Or they could at least have the courtesy to scrape during nighttime / off-peak hours.

  • No, because they won't pay for anything they can get for free. There's only one situation where an AI company will pay for data, and that's when it's owned by someone with scary enough lawyers to pressure them into paying up. Hence why OpenAI has struck licensing deals with a handful of companies while continuing to bulk-scrape unlicensed data from everyone else.

  • Is existing intellectual property law not sufficient? Why aren't companies being prosecuted for large-scale theft?

> The consequence will be almost universal blocks otherwise!

Who cares? They've already scraped the content by then.

  • Bold to assume that an AI scraper won't come back to download everything again, just in case there are any new scraps of data to extract. OP mentioned in the other thread that this bot had pulled 3TB so far, and I doubt their git server actually has 3TB of unique data, so the bot is probably pulling the same data over and over again.

    • FWIW, that includes other scrapers; Amazon's is just the one that showed up the most in the logs.

  • If they only needed a one-time scrape, we really wouldn't be seeing noticeable bot traffic today.

A global tarpit is the solution. It makes sense anyway, even without taking AI crawlers into account. Back when I had to implement that, I went the semi-manual route: parse the access log, and any IP address averaging more than X hits a second on /api gets a -j TARPIT with iptables [1].

Not sure how to implement it in the cloud, though; I've never had the need for that there yet.
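
A minimal sketch of that semi-manual route, assuming a combined-format access log piped in on stdin and the TARPIT target from xtables-addons; the threshold, path prefix, and regex are illustrative, not taken from the linked gist:

    #!/usr/bin/env python3
    # Rough sketch, not the author's actual script: flag any IP averaging more
    # than THRESHOLD hits/second on /api in the log slice we are fed, and send
    # it to the tarpit with iptables.
    import re
    import subprocess
    import sys
    from collections import defaultdict
    from datetime import datetime

    THRESHOLD = 5.0  # max average hits per second on /api

    # Combined log format: IP - user [time] "METHOD /path HTTP/x.x" status size ...
    LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "\S+ (\S+) [^"]*"')

    hits = defaultdict(int)
    first = last = None

    for line in sys.stdin:
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, ts, path = m.groups()
        when = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")
        first = first or when
        last = when
        if path.startswith("/api"):
            hits[ip] += 1

    # Average over the time span actually covered by the log slice.
    span = max((last - first).total_seconds(), 1.0) if first else 1.0

    for ip, count in hits.items():
        if count / span > THRESHOLD:
            # TARPIT requires xtables-addons to be installed.
            subprocess.run(
                ["iptables", "-A", "INPUT", "-s", ip, "-p", "tcp", "-j", "TARPIT"],
                check=False,
            )

Feed it a recent slice of the log (e.g. via tail from a cron job) and it appends one TARPIT rule per offending address.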

[1] https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62f...

If they're AI bots it might be fun to feed them nonsense. Just send back megabytes of "Bezos is a bozo" or something like that. Even more fun if you could cooperate with many other otherwise-unrelated websites, e.g. via time settings in a modified tarpit.
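
As a toy sketch of that idea (not from the comment above, and not a production setup): a tiny HTTP server that slowly drips filler text at whoever requests anything. A real deployment would presumably only route known bot user agents to it.

    #!/usr/bin/env python3
    # Toy nonsense-feeder: respond to every GET by slowly streaming junk text
    # until the client gives up and disconnects.
    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class NonsenseHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            try:
                while True:
                    # Drip-feed filler to waste the crawler's time and bandwidth.
                    self.wfile.write(b"Bezos is a bozo. " * 64)
                    self.wfile.flush()
                    time.sleep(1)
            except (BrokenPipeError, ConnectionResetError):
                pass  # the client gave up

        def log_message(self, *args):
            pass  # keep our own logs quiet

    if __name__ == "__main__":
        ThreadingHTTPServer(("0.0.0.0", 8080), NonsenseHandler).serve_forever()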