Comment by logicprog
8 days ago
I'm not anti-the-tech-behind-AI, but this behavior is just awful, and it makes the world worse for everyone. I wish AI companies would instead, I don't know, fund Common Crawl or something so that they can have a single organization and set of bots collecting all the training data they need and then share it, instead of a bunch of different AI companies doing duplicated work and producing a swath of duplicated requests. Also, I don't understand why they have to make so many requests so often. Why wouldn't one crawl of each site a day, at a reasonable rate, be enough? It's not like up-to-the-minute info is actually important, since LLM training cutoffs are always out of date anyway. I don't get it.
Greed. It's never enough money, never enough data; we must have everything, all the time, instantly. It's also human nature, it seems, looking at how we consume like there's no tomorrow.
It doesn't even make sense to crawl this way. It's just destructive for almost no benefit.
Maybe they assume there'll be only one winner and think, "What if this gives me an edge over the others?" And money is no object. Imagine if they cared about "the web".
That's what's annoying and confusing about it to me.
Which is why internalizing externalities is so important, but that's also extremely hard to do right (leads to a lot of "nerd harder" problems).
This isn't AI. This is corporations doing things because they have a profit motive. The issue here is the non-human corporations and their complete lack of accountability, even if someone brings legal charges against them. Their structure is designed to abstract away responsibility, and they behave that way.
Same old problem. Corps are gonna corp.
Yeah, that's why I said I'm not against AI as a technology, but against the behavior of the corporations currently building it. What I'm confused by (not really confused, since I understand it's just negligence and not giving a fuck, but frustrated and confused in a sort of helpless sense of not being able to get into the mindset) is that while there obviously isn't a profit motive against doing this, there also isn't a clear profit motive to do it: they're wasting their own resources on unnecessarily frequent data collection, and pooling data collection efforts would be cheaper anyway.
The time to regulate tech was like 15 years ago, and we didn't. Why would any tech company expect to have to start following "rules" now?
Yeah, personally I don't think we can regulate this problem away. Whatever regulations get made will either be technically impossible, nonsensical products of people who don't understand what they're regulating and that produce worse side effects (@simonw extracted a great quote from a recent Doctorow post on this: https://simonwillison.net/2025/Aug/14/cory-doctorow/), or they'll just deepen regulatory capture and corporate-state ties, or even actively serve corporate interests, because the big corps are the ones with the economic and lobbying power.
> fund Common Crawl or something so that they can have a single organization and set of bots collecting all the training data they need and then share it
That, or, they could just respect robots.txt, and we could impose enforcement penalties for not respecting a site's request not to be crawled. Granted, we probably need a new standard, but all these AI companies are just shitting all over the web, being disrespectful of site owners, because who's going to stop them? We need laws.
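For what it's worth, "respecting robots.txt" is mechanically trivial; Python even ships a parser for it in the standard library. Here's a minimal sketch of what a well-behaved crawler would do (the user agent and site are made up for illustration):

```python
import time
import urllib.robotparser
import urllib.request

# Hypothetical crawler name and target site, just for illustration.
USER_AGENT = "ExampleAIBot"
BASE_URL = "https://example.org"

# Fetch and parse the site's robots.txt once up front.
rp = urllib.robotparser.RobotFileParser()
rp.set_url(BASE_URL + "/robots.txt")
rp.read()

def polite_fetch(path: str) -> bytes | None:
    """Fetch a page only if robots.txt allows it, honoring any Crawl-delay."""
    url = BASE_URL + path
    if not rp.can_fetch(USER_AGENT, url):
        return None  # the site asked not to be crawled here; stop
    time.sleep(rp.crawl_delay(USER_AGENT) or 1)  # fall back to 1s between requests
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The problem isn't that this is hard to do; it's that there's currently no consequence for skipping the `can_fetch` check entirely.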
> That, or, they could just respect robots.txt
IMO, if digital information is posted publicly online, it's fair game to be crawled unless that crawl is unreasonably expensive or takes it down for others, because these are non-rivalrous resources that are literally already public.
> we could impose enforcement penalties for not respecting a site's request not to be crawled... We need laws.
How would that be enforceable? A central government agency watching network traffic? A means of appealing to a bureaucracy like the FCC? Setting it up so you can sue companies that do it? All of those seem like bad options to me.
> IMO, if digital information is posted publicly online, it's fair game to be crawled unless that crawl is unreasonably expensive or takes it down for others, because these are non-rivalrous resources that are literally already public.
I disagree. Whether or not content should be available to be crawled depends on the content's license and on what the site owner specifies in robots.txt (or, in the case of user-submitted content, whatever the site's ToS allows).
It should be wholly possible to publish a site intended for human consumption only.
> How would that be enforceable?
Making robots.txt or something else a legal standard instead of a voluntary one. Make it easy for site owners to report violations along with logs, with legal action taken against the violators.
> unless that crawl is unreasonably expensive or takes it down for others
This _is_ the problem Anubis is intended to solve: code forges like Codeberg and other Forgejo instances, where many routes trigger expensive Git operations (e.g. git blame) and scrapers don't respect the robots.txt asking them not to hit those routes.
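For context, Anubis gates those expensive routes behind a proof-of-work challenge run in the visitor's browser. The sketch below shows the general idea only, not Anubis's actual code, and the difficulty value is made up: the server hands out a random challenge, the client burns CPU finding a nonce, and the server verifies the answer with a single hash.

```python
import hashlib
import secrets

# Assumed difficulty for illustration; real deployments tune this,
# and the solving step runs as JavaScript in the visitor's browser.
DIFFICULTY_BITS = 20

def make_challenge() -> str:
    """Server side: issue a random per-visitor challenge."""
    return secrets.token_hex(16)

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str) -> int:
    """Client side: brute-force a nonce that meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one cheap hash to check the client's work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

One human visitor pays that cost once per session; a scraper hammering thousands of blame/log routes pays it over and over, which is the point.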
Laws are inherently national, which the internet is not. By all means write a law that crawlers need to obey robots.txt, but how are you going to make Russia or China follow that law?
If those companies cared about acting in good faith, they wouldn't be in AI.