
Comment by neoromantique

2 days ago

> That's a pretty niche issue, but fairly easy to solve: statically prebuild the most common commits (the last XX) and heavily rate limit deeper ones.
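For concreteness, the quoted mitigation might look something like the following nginx fragment. This is an untested sketch: the paths, zone name, prerender directory, and `git_backend` upstream are all hypothetical, and the prerendering job itself is assumed to exist elsewhere.

```nginx
# Per-IP rate limit for arbitrary (deep-history) commit pages.
limit_req_zone $binary_remote_addr zone=deep_commits:10m rate=1r/s;

server {
    # Recent commits are assumed to be pre-rendered to static HTML
    # by a periodic job (not shown) and served without touching git.
    location /commit/recent/ {
        root /var/www/prerendered;
    }

    # Everything else hits the git web backend, heavily rate limited.
    location /commit/ {
        limit_req zone=deep_commits burst=5 nodelay;
        proxy_pass http://git_backend;
    }
}
```

Note that the `limit_req` zone above is keyed per client IP, which is exactly the assumption point 2 below attacks: a million independent IPs each stay under the limit while collectively hammering the backend.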

1. That doesn't appear to match the scrapers' fetching patterns at all.

2. 1M independent IPs hitting random commits from across a 25-year history is not, in fact, "easy to solve". It is addressable, but not easy ...

3. Why should I have to do anything at all to deal with these scrapers? Why is the onus not on them to do the right thing?