
Comment by bayindirh

1 day ago

This interpretation won't take you that far.

Crawl prevention is not new. Many news outlets and biggish websites have been blocking access by non-human agents in various ways for a very long time.

Now non-human agents have improved and started leeching everything they can find, so the countermeasures are evolving, too.

News outlets are also public sites on the public internet.

Source-available code repositories are also on the public internet, yet said agents crawl and use that code, too, backed by fair-use claims.