Comment by overfeed

10 hours ago

> LLMs have other ways of accessing the content, they don’t need the Web Archive.

What's the conclusion from this train of thought? Just because some burglars can pick locks doesn't mean you should leave your front door unlocked.

Locking a door (or publishing a robots.txt) is how one establishes mens rea for those who bypass the barrier.

This is like arguing that libraries can't provide public WiFi because doing so would give the public legal permission to pirate TV shows. They're two unrelated things. Some members of the public may argue that they're making fair use rather than pirating anything, but that still has nothing to do with the library.

But as I understand it, the Web Archive does respect robots.txt, while LLM scrapers absolutely do not and use all sorts of dodgy methods to get around it already...
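For context, robots.txt is purely advisory: a site publishes a plain-text file listing per-crawler rules, and compliance is entirely voluntary. A minimal sketch of the kind of policy being discussed here (ia_archiver is the Internet Archive's documented crawler token; GPTBot and CCBot are commonly documented AI-crawler tokens; the specific paths are hypothetical):

```
# robots.txt — advisory only; honoring it is up to the crawler

# Allow the Internet Archive's crawler everywhere
User-agent: ia_archiver
Disallow:

# Ask AI-training crawlers to stay out entirely
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else: crawl anything except a (hypothetical) private path
User-agent: *
Disallow: /private/
```

Nothing enforces this file; it only documents the site owner's intent, which is exactly why "LLM scrapers absolutely do not" respect it is the crux of the complaint.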

The actual root cause is that we're allowing LLM companies to completely disregard copyright law for their own profit. Whether the LLM companies scrape the Web Archive or the original source doesn't change the copyright-infringement implications in any way, and cutting off the Web Archive changes nothing in practice (because, as I understand it, LLM scraping is already prolific all over the web).