Comment by jerf

7 months ago

You want to consider the ratio of your resource consumption to their resource consumption. If you trickle bytes from /dev/random, you are holding open a TCP connection with some minimal overhead, and that's about what they are doing too. Let's assume they are bright enough to use one of the many modern languages or frameworks that can easily handle 10K/100K connections or more on a modern system. Not all of them are that bright, but certainly some are. You're basically burning your resources against theirs at roughly 1:1. That's not a winning scenario for you.
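
For concreteness, here is a minimal sketch of that trickle setup in Go (the port, pacing, and chunk size are arbitrary choices for illustration). The thing to notice is that the server holds a socket and a goroutine per victim for as long as they stay connected, which is exactly the 1:1 tie-up described above:

    package main

    // Trickle-tarpit sketch: hold each connection open and dribble a few
    // random bytes per second. We pay for one socket and one goroutine per
    // victim, so our cost scales roughly 1:1 with theirs.

    import (
        "crypto/rand"
        "log"
        "net"
        "time"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080") // arbitrary port for this sketch
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                buf := make([]byte, 16)
                for {
                    rand.Read(buf) // crypto/rand wraps the OS entropy source
                    if _, err := c.Write(buf); err != nil {
                        return // the client gave up
                    }
                    time.Sleep(time.Second) // trickle, don't stream
                }
            }(conn)
        }
    }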

A gzip bomb means you serve 10MB, but when they decompress it they try to allocate vast quantities of RAM on their end and likely crash. That's a much better ratio.
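
For anyone curious what that payload looks like, here is a rough sketch in Go of building one (the sizes are illustrative; runs of zeros compress at roughly 1000:1 under deflate, so about 10GB of zeros gzips down to around 10MB). Serving it is then just a matter of sending the file with a Content-Encoding: gzip header:

    package main

    // Gzip-bomb sketch: compress a huge run of zero bytes into a small file.
    // A naive client that decompresses the whole response into memory will
    // try to allocate the full uncompressed size.

    import (
        "compress/gzip"
        "log"
        "os"
    )

    func main() {
        f, err := os.Create("bomb.gz")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        zw, err := gzip.NewWriterLevel(f, gzip.BestCompression)
        if err != nil {
            log.Fatal(err)
        }
        defer zw.Close()

        zeros := make([]byte, 1<<20)   // 1 MB of zero bytes, reused
        for i := 0; i < 10*1024; i++ { // 10 * 1024 MB = ~10 GB uncompressed
            if _, err := zw.Write(zeros); err != nil {
                log.Fatal(err)
            }
        }
    }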

Trickling from /dev/random might also open up a new DoS vector via entropy consumption, so it could even end up worse than 1:1.

  • Entropy doesn't really get "consumed" on modern systems. You can read terabytes from /dev/random without running out of anything.

  • As mentioned, not really an issue on a modern system. But in any case, you could just read, say, 1K from /dev/urandom into a buffer once and then keep resending that buffer over and over (see the sketch below).
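
To make that concrete, here is a variant of the trickle sketch above (same arbitrary port): fill a 1K buffer from /dev/urandom once at startup and resend it forever, so the entropy source is never touched again after the first read:

    package main

    // Buffer-reuse variant: read 1 KB from /dev/urandom once, then resend
    // that same buffer to every connection, one write per second.

    import (
        "io"
        "log"
        "net"
        "os"
        "time"
    )

    func main() {
        f, err := os.Open("/dev/urandom")
        if err != nil {
            log.Fatal(err)
        }
        junk := make([]byte, 1024)
        if _, err := io.ReadFull(f, junk); err != nil {
            log.Fatal(err)
        }
        f.Close()

        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                for {
                    if _, err := c.Write(junk); err != nil {
                        return
                    }
                    time.Sleep(time.Second)
                }
            }(conn)
        }
    }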

That makes sense. It all comes down to their behavior. Will they sit there waiting for the download to finish, or just start sending other requests in parallel until you DoS yourself? My hope is that they would flag the site as low-value and go looking elsewhere.