
Comment by trollbridge

12 hours ago

Yeah, it serves the purpose of blocking the kind of proxy traffic that isn't in Google's own best interest.

Only Google is allowed to scrape the web.

> Only Google is allowed to scrape the web.

If I'm not mistaken, the plaintiffs in the US v. Google antitrust litigation in the DC Circuit tried to argue that website operators are biased toward allowing Google to crawl and against allowing other search engines to do the same.

The court rejected this argument because the plaintiffs did not present any evidence to support it.

For someone who does not follow the web's history, how would one produce direct evidence that the bias exists?

  • > For someone who does not follow the web's history, how would one produce direct evidence that the bias exists?

    Take a bunch of websites, fetch their robots.txt file and check how many allow GoogleBot but not others?
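That robots.txt survey could be sketched with Python's standard-library parser. The policy below is an invented example of the kind of file you'd be looking for (Googlebot whitelisted, everyone else blocked):

```python
from urllib.robotparser import RobotFileParser

def allows(robots_txt: str, agent: str, path: str = "/") -> bool:
    """Return True if the given robots.txt text permits `agent` to fetch `path`."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, path)

# Hypothetical robots.txt that whitelists Googlebot only.
sample = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

print(allows(sample, "Googlebot"))     # True: Googlebot is allowed
print(allows(sample, "SomeOtherBot"))  # False: everyone else is blocked
```

Run this over the robots.txt of, say, the top few thousand sites and count how many return True for Googlebot but False for other crawlers.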

Yup, exactly. Google must be the only one allowed to scrape the web; Google can't have any competition. Calling it "in users' best interest" is just like their other marketing cons: "Play Integrity for users' security," etc.

This is demonstrably false by the success of many scrapers from AI companies.

  • LLMs aren't a good indicator of success here, because an LLM trained on 80% of the data is about as good as one trained on 100%, assuming the type/category of data is distributed evenly. Proxies help when you do need access to 100% of the data, including data behind social-media login walls.

Google does not use residential proxies.

This does nothing against your ability to scrape the web the Google way, AKA from your own assigned IP range, obeying robots.txt, and with a user agent that explicitly says what you're doing and gives website owners a way to opt out.

What Google doesn't want (and I don't think that's a bad thing) is competitors scraping the web in bad faith, without disclosing what they're doing to site owners and without giving them the ability to opt out.

If Google doesn't stop these proxies, unscrupulous parties will have a competitive advantage over Google; it's that simple. Then Google will have to choose between giving up (unlikely) and becoming unscrupulous itself.

  • > This does nothing against your ability to scrape the web the Google way

    I thought that Google has access to significant portions of the internet that non-Google bots won’t have access to?

    • Their crawler has known IPs that get white-glove treatment from every site with a paywall, for example.

Have you got any proof of Google scraping from residential proxies users don't know about, rather than from their clearly labelled AS? Otherwise you're mixing entirely different things into one claim.

  • That's the whole point. Websites that try to block scraping will let Google scrape without any hurdle because of Google's ads and search network. This gives Google an advantage over new players, because as a new brand you are hardly going to convince a website to allow scraping, even if your product may actually be more advantageous to the website (for example, suppose you made a search engine that doesn't suck like Google and aggregates links instead of copying content from websites).

    Proxies, in comparison, can give new players a fighting chance. That said, I doubt any legitimate and ethical business would use proxies.

  • I don't think the parent post is claiming that Google is using other people's networks to scrape the web, only that they have a strong incentive to keep other players from doing that.