Comment by account42
2 days ago
The question then is why read-only users are consuming so many resources that serving them big chunks of JS instead reduces the load on the server. Maybe improve your rendering and/or caching before employing DRM solutions that are doomed to fail anyway.
The problem it was originally fixing is bad scrapers accessing dynamic site content that's expensive to produce, like trying to crawl all diffs in a git repo, or all mediawiki oldids. Now it's also used on mostly static content because it is effective against scrapers that otherwise ignore robots.txt.
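For illustration, a minimal robots.txt sketch of the kind such a site might already publish (the paths here are hypothetical stand-ins for expensive dynamic endpoints, not taken from any particular site). Well-behaved crawlers honor these rules; the scrapers being discussed simply ignore them, which is why the JS challenge gets deployed on top:

    User-agent: *
    Disallow: /index.php    # MediaWiki-style: covers oldid/diff/history URLs, not plain article views
    Disallow: /commit/      # hypothetical path for per-commit diffs in a git web frontend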