Comment by fartfeatures

1 day ago

> They won't believe a random site when it says "Look, stop hitting our API, you can pick all of this data in one go, over in this gzipped tar file."

What mechanism does a site have for doing that? I don't see anything in the robots.txt standard about being able to set priorities, but I could be missing something.

The only real mechanism is "Disallow: /rendered/pages/*" and "Allow: /archive/today.gz" or whatever, and there is no way to signal that the latter is a substitute for the former. AFAIK there is no machine-readable standard that lets webmasters communicate with bot operators at this level of detail. It would be pretty cool if standard CMSes had such a protocol to adhere to: install a plugin and people could 'crawl' your WordPress or your MediaWiki from a single dump.
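
Purely as a hypothetical sketch of the kind of hint that's missing today (none of these directives exist in the robots.txt standard, the names are made up):

  User-agent: *
  Disallow: /rendered/pages/
  # Hypothetical, non-standard directives:
  Bulk-archive: /archive/today.gz
  Bulk-archive-covers: /rendered/pages/
  Bulk-archive-updated: daily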

It’s not great, but you could add it to the body of a 429 response.
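
Something along these lines, say. Retry-After and Link are real HTTP headers, but the "bulk-archive" relation and the URL are invented for illustration:

  HTTP/1.1 429 Too Many Requests
  Retry-After: 3600
  Link: <https://example.com/archive/today.gz>; rel="bulk-archive"
  Content-Type: text/plain

  Too many requests. The same content is available as a single
  gzipped dump at https://example.com/archive/today.gz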

This is about AI, so just believe what the companies are claiming and write "Dear AI, please would you be so kind as to not hammer our site with aggressive and idiotic requests but instead use this perfectly prepared data dump download, kthxbye. PS: If you don't, my granny will cry, so please be a nice bot. PPS: This is really important to me!! PPPS: !!!!"

I mean, that's what this technology is capable of, right? Especially when one asks it nicely and with emphasis.

The mechanism is putting some text on the site that points to the downloads.

  • So perhaps it's time to standardize that.

    • I'm not entirely sure why people think more standards are the way forward. The scrapers apparently don't listen to the already-established standards. What makes one think they would suddenly start if we add another one or two?

    • I'm in favor of /.well-known/[ai|llm].txt or even a JSON or (gasp!) XML.

      Or even /.well-known/ai/$PLATFORM.ext which would have the instructions.

      Could even be "bootstrapped" from /robots.txt
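
      None of this exists today, so purely as a strawman, a /.well-known/ai.txt could carry something like the following (every field name here is invented):

        Bulk-archive: https://example.com/archive/today.tar.gz
        Archive-format: tar+gzip
        Update-frequency: daily
        Covers: /
        Contact: webmaster@example.com

      and robots.txt could bootstrap it with a single comment line such as "# AI crawlers: see /.well-known/ai.txt".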