
Comment by scott_w

4 days ago

A Django+Celery app behind Nginx back in the day. Most maintenance would be discovering a new failure mode:

- certificates not being renewed in time

- Celery eating up all RAM and having to be recycled

- RabbitMQ getting blocked requiring a forced restart

- random issues with Postgres that usually required a hard restart of PG (running low on RAM maybe?)

- configs having issues

- running out of inodes

- DNS not updating when upgrading to a new server (no CDN at the time)

- data centre going down, taking the provider’s email support with it (yes, really)

Bear in mind I'm going back a decade now, so my memory is rusty. Each issue was solvable, but each would happen at random, and even mitigating them was time that I (a single dev) was not spending on new features or fixing bugs.

I mean, going back a decade might be part of the reason?

Configs having issues is the number one reason I like this setup so much.

I can configure everything on my local machine and test it there, and then deploy it to a server the same way.

I don't have to build a local setup and then a remote one.

  • Er… what? Even in today’s world with Docker, you have differences between dev and prod. For a start, one is accessed via the internet and requires TLS configs to work correctly. The other is accessed via localhost.

    • Just FYI, you can put whatever you want in /etc/hosts; it gets checked before the resolver. So you can run your website on localhost with your regular hostname over HTTPS.
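
      A minimal sketch of that override (example.com is a placeholder hostname, not from the thread):

      ```
      # /etc/hosts: consulted before the system DNS resolver,
      # so the production hostname resolves to the local machine.
      127.0.0.1   example.com
      ```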


    • I use HTTPS for localhost; there are a ton of options for that.

      But yes, the cert is created differently in prod, and there are a few other differences.

      Still, it's much closer than in the cloud.
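
      As a hedged sketch, since the original app sat behind Nginx: a local TLS vhost might look like this (the hostname, cert paths, and upstream port are placeholders; the cert could come from a local CA tool such as mkcert rather than the prod issuer):

      ```
      # Hypothetical local TLS vhost; names and paths are illustrative only.
      server {
          listen 443 ssl;
          server_name myapp.test;

          # Locally issued certificate (e.g. generated by a local CA tool)
          ssl_certificate     /etc/nginx/certs/myapp.test.pem;
          ssl_certificate_key /etc/nginx/certs/myapp.test-key.pem;

          location / {
              proxy_pass http://127.0.0.1:8000;  # local Django app server
          }
      }
      ```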