Comment by PaulDavisThe1st

1 day ago

1. That doesn't appear to match the scrapers' fetching patterns at all.

2. 1M independent IPs hitting random commits from across a 25-year history is not, in fact, "easy to solve". It is addressable, but not easy ...

3. Why should I have to do anything at all to deal with these scrapers? Why is the onus not on them to do the right thing?

I did not imply that it does. I meant having a budget allocated for 'unauthenticated deep history queries': once it's exhausted, it's exhausted, and you only handle dynamic fetching for authorized users until the cooldown ends.
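
A minimal sketch of that budget idea, assuming a Go HTTP frontend sitting in front of the git server; `isDeepHistory`, `isAuthenticated`, the capacity, and the cooldown are all hypothetical placeholders for however a real host would classify and tune this:

```go
package main

import (
	"net/http"
	"strings"
	"sync"
	"time"
)

// historyBudget is a global allowance for unauthenticated deep-history
// requests; once spent, such requests are rejected until the cooldown
// window elapses and the budget refills.
type historyBudget struct {
	mu        sync.Mutex
	remaining int
	capacity  int
	cooldown  time.Duration
	resetAt   time.Time
}

func (b *historyBudget) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	if now.After(b.resetAt) {
		// Cooldown elapsed (or first request): refill the budget.
		b.remaining = b.capacity
		b.resetAt = now.Add(b.cooldown)
	}
	if b.remaining == 0 {
		return false
	}
	b.remaining--
	return true
}

// isDeepHistory is a stand-in classifier; a real server would look at
// ref age, commit depth, or which view of the repo is being requested.
func isDeepHistory(r *http.Request) bool {
	return strings.Contains(r.URL.Path, "/commit/")
}

// isAuthenticated is likewise a stand-in for real session handling.
func isAuthenticated(r *http.Request) bool {
	_, err := r.Cookie("session")
	return err == nil
}

func budgetMiddleware(b *historyBudget, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isDeepHistory(r) && !isAuthenticated(r) && !b.allow() {
			// Budget spent: refuse unauthenticated deep-history
			// traffic until the cooldown resets it.
			w.Header().Set("Retry-After", "3600")
			http.Error(w, "deep-history budget exhausted",
				http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	budget := &historyBudget{capacity: 10000, cooldown: time.Hour}
	// http.FileServer here is a placeholder for the actual git HTTP backend.
	http.ListenAndServe(":8080",
		budgetMiddleware(budget, http.FileServer(http.Dir("."))))
}
```

Note this is one shared budget rather than per-IP limits, which is the point: per-IP counters are exactly what 1M independent IPs defeat, whereas a global cap bounds total deep-history load regardless of how the scrapers spread themselves out.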

Is it pretty? No, but git repo hosting is also a fairly niche thing overall.