Comment by nijave

1 year ago

Small/medium SaaS. Had ~8 hours of 100k reqs/sec last year, when we usually see 100-150 reqs/sec. Moved everything behind a Cloudflare Enterprise setup and ditched AWS Client VPN (OpenVPN-based) for Cloudflare WARP.

I've only been here 1.5 years, but it sounds like we usually see one decent-sized DDoS a year, plus a handful of other "DoS" incidents, usually AI crawler extensions or third parties calling too aggressively.

There are some extensions/products that create a "personal AI knowledge base"; they'll use the customer's login credentials and scrape every link once an hour. Some of those links are really resource-intensive data or report requests that are very rare in real usage.

Did you put rate-limiting rules on your webserver?

Why was that not enough to mitigate the DDoS?

  • Not the same poster, but the first "D" in "DDoS" is why rate limiting doesn't work: attackers these days usually have a _huge_ pool (tens of thousands) of residential IPv4 addresses to work with.
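
    Rough back-of-the-envelope sketch of why per-IP limits never trigger in that situation (the pool size and per-IP limit below are assumptions for illustration; only the ~100k req/s figure comes from the thread):

    ```python
    # Illustrative arithmetic: a 100k req/s flood spread across a large
    # residential proxy pool stays below any sane per-IP rate limit,
    # so per-IP throttling on the webserver never fires.

    attack_rps = 100_000   # aggregate attack rate (req/s), as described upthread
    pool_size = 25_000     # assumed size of the residential IP pool
    per_ip_limit = 10      # assumed per-IP limit a webserver might enforce (req/s)

    per_ip_rps = attack_rps / pool_size
    print(f"each attacking IP sends ~{per_ip_rps:.0f} req/s")      # ~4 req/s
    print(f"below the per-IP limit: {per_ip_rps < per_ip_limit}")  # True
    ```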

  • We had rate limiting with Istio/Envoy, but Envoy was using 4-8x its normal memory processing that much traffic and kept crashing.

    The attacker was using residential proxies and making about 8 requests before cycling to a new IP.

    Challenges work much better, since they use cookies or other metadata to establish that a client is trusted and then let its requests pass. This stops bad clients at the first request, but it takes something more sophisticated than a webserver with basic rate limiting.
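
    A minimal sketch of that challenge-cookie idea (the HMAC scheme, names, and fingerprint binding here are assumptions for illustration, not how Cloudflare or Envoy actually implement it): the first request has to pass a challenge and gets back a signed, time-limited token; later requests that present it are let through, so rotating to a fresh IP doesn't help because the new connection still arrives without a valid cookie.

    ```python
    import hashlib
    import hmac
    import time

    SECRET = b"server-side secret"   # assumed: held only by the edge/proxy
    TTL = 3600                       # assumed token lifetime in seconds

    def issue_challenge_token(client_fingerprint: str) -> str:
        """Issued only after the client passes a challenge (JS check, CAPTCHA, etc.)."""
        expires = str(int(time.time()) + TTL)
        msg = f"{client_fingerprint}|{expires}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{expires}.{sig}"

    def is_trusted(client_fingerprint: str, token: str) -> bool:
        """Requests without a valid, unexpired token get challenged instead of served."""
        try:
            expires, sig = token.split(".")
        except ValueError:
            return False
        if not expires.isdigit() or int(expires) < time.time():
            return False
        msg = f"{client_fingerprint}|{expires}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    # A real client solves the challenge once and keeps its cookie across requests;
    # an attacker cycling IPs every ~8 requests shows up each time without a cookie
    # and is stopped at the challenge rather than at a per-IP counter.
    token = issue_challenge_token("tls-or-ua-fingerprint")
    print(is_trusted("tls-or-ua-fingerprint", token))    # True
    print(is_trusted("tls-or-ua-fingerprint", "bogus"))  # False
    ```

    (In practice the token would be set as an HTTP cookie by the edge proxy; the point is just that trust is keyed to something the client carries with it, not to its source IP.)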

    • > The attacker was using residential proxies and making about 8 requests before cycling to a new IP.

      So how is Cloudflare supposed to distinguish legitimate new visitors from new attack IPs if you can't?

      If the answer were "they can't", that would match my experience as a Cloudflare user perfectly.

  • That might have been good for preventing someone from spamming your HotScripts guestbook in 2005, but not much else.