
Comment by nijave

18 hours ago

While I do sympathize with the AI DDoS situation, it'd be nice if there were a solution that still lets the scrapers work so they can pull official docs.

For instance: MCP, static sites that are easy to scale, or a cache in front of a dynamic site engine.
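The "cache in front of a dynamic site engine" option could be as simple as an nginx `proxy_cache` layer. A minimal sketch, assuming a hypothetical docs backend on port 8000 and a placeholder hostname (none of these names come from the comment):

```nginx
# Hypothetical sketch: cache responses from a dynamic docs engine
# so scraper traffic mostly hits the cache, not the backend.
proxy_cache_path /var/cache/nginx/docs levels=1:2 keys_zone=docs_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name docs.example.com;          # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8000;  # assumed dynamic site engine
        proxy_cache docs_cache;
        proxy_cache_valid 200 301 10m;     # cache successful responses briefly
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

With something like this, repeated scraper fetches of the same doc pages are served from disk cache instead of regenerating each page dynamically.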

Of course, static websites are the best solution to that problem.

Our documentation and main website are not fronted by this protection, so they're still accessible to the scrapers.