Comment by kilburn

1 year ago

I don't think this is worth it unless you are setting up your own CDN or similar. In the article, they exchange 1 to 4 stat calls for:

- A more complicated nginx configuration. This is no light matter. You can see in the comments that even the author got bugs in their first attempt. For instance, introducing an HSTS header now means you have to remember to add it in all of those locations.

- Running a few regexes per request. This is probably still significantly cheaper than the stat calls, but I can't tell by how much (and the author hasn't checked either).

- Returning the default 404 page instead of the CMS's for any URL under the defined "static prefixes". This is actually the biggest change, both in user-visible behavior and in performance (particularly if a misbehaving crawler starts checking non-existing URLs in bulk or similar). The article doesn't even mention this.
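To make the trade-off concrete, here is a minimal, hypothetical sketch of the kind of configuration the article describes (paths, ports, and header values are illustrative, not taken from the article). Note how the `add_header` directive has to be repeated per location, and how a miss under the static prefix never reaches the CMS:

```nginx
# Requests under /assets/ are served straight from disk; any miss
# gets nginx's built-in 404 instead of falling through to the CMS.
location ^~ /assets/ {
    root /var/www/site;
    # add_header is not inherited once set in a location block, so
    # HSTS must be repeated here as well:
    add_header Strict-Transport-Security "max-age=31536000" always;
}

# Everything else goes to the dynamic backend (the CMS).
location / {
    add_header Strict-Transport-Security "max-age=31536000" always;
    proxy_pass http://127.0.0.1:8000;
}
```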

The performance gains for regular accesses are purely speculative because the author didn't make any effort to try and quantify them. If somebody has quantified the gains I'd love to hear about it though.

I agree. But on that final point, I have to say I hate setups where bots hitting thousands of non-existent addresses cause every one of those requests to go to a dynamic backend just to produce a 404. A while back I made a Rails setup that dumped routes to an nginx map of valid first-level paths, but I haven't seen anyone else do that sort of thing.
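A rough sketch of that route-dumping idea, assuming the commenter's general approach rather than their actual code: in a real Rails app the paths would come from `Rails.application.routes`, but here a static list stands in for that so the script is self-contained. It writes an nginx `map` that marks known first-level prefixes as valid.

```ruby
# Hypothetical route list; in Rails you would collect these from
# Rails.application.routes.routes instead.
route_paths = ["/posts/:id", "/users/:id/profile", "/about", "/search"]

# Extract the unique first-level segments: "/posts", "/users", ...
prefixes = route_paths.map { |p| "/" + p.split("/")[1] }.uniq

# Emit an nginx map block keyed on $uri.
File.open("valid_prefixes.map", "w") do |f|
  f.puts "map $uri $valid_route {"
  f.puts "    default 0;"
  prefixes.each do |prefix|
    f.puts "    ~^#{Regexp.escape(prefix)}(/|$) 1;"
  end
  f.puts "}"
end
```

The generated file could then be `include`d in `nginx.conf`, with something like `if ($valid_route = 0) { return 404; }` guarding the proxy location, so bulk probes for bogus paths never hit the backend. Regenerating the map would need to be hooked into the deploy process.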

  • See Varnish Cache and others... Or use a third-party CDN that offers the feature.

    There are lots of ways to configure them with route-based behavior and (in)validation.

  • I've been thinking about that exact problem and solution with the map module. On the off chance you see this, do you happen to have your solution published somewhere?