Comment by voidnap

2 days ago

The proof of work isn't really the crux. They've been pretty clear about this from the beginning.

I'll just quote from their blog post from January.

https://xeiaso.net/blog/2025/anubis/

Anubis also relies on modern web browser features:

- ES6 modules to load the client-side code and the proof-of-work challenge code.

- Web Workers to run the proof-of-work challenge in a separate thread to avoid blocking the UI thread.

- Fetch API to communicate with the Anubis server.

- Web Cryptography API to generate the proof-of-work challenge.

This ensures that browsers are decently modern in order to combat most known scrapers. It's not perfect, but it's a good start.

This will also lock out users who have JavaScript disabled, prevent your server from being indexed in search engines, require users to have HTTP cookies enabled, and require users to spend time solving the proof-of-work challenge.

This does mean that users using text-only browsers or older machines where they are unable to update their browser will be locked out of services protected by Anubis. This is a tradeoff that I am not happy about, but it is the world we live in now.

Except this is exactly the problem. Now you are checking for mainstream browsers instead of for some notion of legitimate users. And as TFA shows, a motivated attacker can bypass all of that while legitimate users of non-mainstream browsers are blocked.

Aren't most scrapers using things like Playwright or Puppeteer anyway by now, especially since so many pages are rendered with JS and would be unreadable without executing modern JS even without Anubis?

... except when you do not crawl with a browser at all. The challenge is trivial to solve outside a browser, just like the taviso post demonstrated.
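
To give a sense of how little the browser requirement buys, here is a minimal sketch of an out-of-browser solver, assuming the challenge is the usual leading-zeros scheme (find a nonce such that SHA-256(challenge + nonce) starts with N zero hex digits). The function name, difficulty encoding, and concatenation format are illustrative assumptions, not taken from the Anubis source.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// solve brute-forces a nonce so that SHA-256(challenge + nonce) begins with
// `difficulty` zero hex digits. The exact scheme is an assumption about how
// Anubis-style challenges work, not a copy of its implementation.
func solve(challenge string, difficulty int) int {
	prefix := strings.Repeat("0", difficulty)
	for nonce := 0; ; nonce++ {
		sum := sha256.Sum256([]byte(fmt.Sprintf("%s%d", challenge, nonce)))
		if strings.HasPrefix(hex.EncodeToString(sum[:]), prefix) {
			return nonce
		}
	}
}

func main() {
	// Difficulty 4 means roughly 16^4 ≈ 65k hashes on average, which a single
	// core finishes in a fraction of a second.
	fmt.Println(solve("example-challenge", 4))
}
```

No Web Workers, no Web Crypto, no ES6 modules: a few lines of plain code in any language gets a scraper past the proof of work, while the cost per request stays negligible for a crawler farm.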

This makes zero sense; it's simply the wrong approach. I'm already tired of saying so and getting attacked for it, so I'm glad professional-random-Internet-bullshit-ignorer Tavis Ormandy wrote this one.