
Comment by ndriscoll

19 days ago

Beyond something like running an ML model, what web pages are expensive to generate these days (expensive enough that 1-10 requests/second matters at all)?

I've worked on multiple sites like this over my career.

Our pages were expensive to generate, so what scraping did was blow out all our caches by yanking cold pages/images into memory: page caches, fragment caches, image caches, but also the DB working set in RAM, making every single thing on the site slow.
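
As a rough sketch of that failure mode (a toy LRU cache, not the actual stack described above): a crawler walking every cold page pushes the hot working set out, so even the pages real users hit constantly start missing.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache standing in for a page/fragment/DB buffer cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)     # mark as recently used
            return self.entries[key]
        return None                           # miss: caller re-renders/re-fetches (expensive)

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=1000)

# Normal traffic: a small set of hot pages stays cached.
for page_id in range(100):
    cache.put(f"page:{page_id}", "rendered html")

# A scraper walking every cold page (and every old revision) pulls cold keys in,
# pushing the hot set out; subsequent real-user requests all miss.
for page_id in range(100, 10_000):
    cache.put(f"page:{page_id}", "rendered html")

hot_hits = sum(cache.get(f"page:{p}") is not None for p in range(100))
print(f"hot pages still cached after the crawl: {hot_hits}/100")  # 0/100
```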

Usually ones that are written in a slow language, do lots of IO to other web services or databases in a serial, blocking fashion, maybe don't have proper structure or indices in their DBs, and so on. I have seen some really terribly performing spaghetti websites, and have experience with them collapsing under scraping load. With a mountain of technical debt in the way, it can even be challenging to fix such a thing.

  • Even if you're doing serial IO on a single thread, I'd expect you to be able to handle hundreds of qps. I'd think a slow language wouldn't be 1000x slower than something like functional Scala. It could be slow if you're missing an index, but then I'd expect the thing to barely run for normal users; scraping at 2 req/s isn't really the issue there.
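
Rough arithmetic behind the hundreds-of-qps estimate (the latencies below are assumptions for illustration, not measurements from any of the sites above):

```python
# Back-of-envelope throughput for a single thread doing serial, blocking IO.
queries_per_page = 5
latency_per_query_s = 0.001      # assume ~1 ms per indexed DB query
render_time_s = 0.002            # assume ~2 ms of template/HTML rendering

time_per_request = queries_per_page * latency_per_query_s + render_time_s
print(f"single-thread capacity: ~{1 / time_per_request:.0f} req/s")   # ~143 req/s

# For a scraper at 2 req/s to saturate one thread, each request has to cost ~500 ms:
print(f"work needed to saturate at 2 req/s: {1 / 2:.3f} s per request")
```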

Run a MediaWiki, as described in the post. It's very heavy. Specifically for history, I'm guessing it has to re-parse the entire page and do all the link and template lookups, because previous versions of the page won't be in any cache.
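
A hedged sketch of why old revisions hurt (illustrative cache-aside logic, not MediaWiki's actual parser-cache code): if only the current revision's rendered output is cached, every history/diff request pays the full parse cost.

```python
# Illustrative cache-aside rendering; names and behavior are assumptions for the sketch.
parser_cache = {}   # keyed by (page, revision); only current revisions get cached here

def expensive_parse(page, revision):
    # Stand-in for the real work: wikitext parsing plus template/link lookups.
    return f"<html>{page}@{revision}</html>"

def render(page, revision, is_current):
    key = (page, revision)
    if key in parser_cache:
        return parser_cache[key]
    html = expensive_parse(page, revision)
    if is_current:
        parser_cache[key] = html             # old revisions are never cached,
    return html                              # so every history/diff hit re-parses

print(render("Main_Page", 1234, is_current=True))   # cached after first render
print(render("Main_Page", 1200, is_current=False))  # re-parsed on every request
```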

  • The original post says it's not actually a burden though; they just don't like it.

    If something is so heavy that 2 requests/second matters, it would've been completely infeasible in, say, 2005 (e.g. a low-power N100 is ~20x faster than the Athlon XP 3200+ I used back then; an i5-12600 is almost 100x faster; storage is >1000x faster now). Or has MediaWiki been getting less efficient over the years to keep up with more powerful hardware?

    • Oh, I was a bit off. They also indexed diffs:

      > And I mean that - they indexed every single diff on every page for every change ever made. Frequently with spikes of more than 10req/s. Of course, this made MediaWiki and my database server very unhappy, causing load spikes, and effective downtime/slowness for the human users.
