Comment by fardinahsan

1 year ago

Can you go into some high-level detail as to why it was slow and what you did to make it fast? That's always the most interesting part of a post like this.

The DB connections were poorly managed. Each query began by opening a new connection, checked whether that failed, slept, retried, and only then ran. Several of these queries were called in loops, so the connection pool was always dry and the code spent most of its time sleeping. The queries themselves were also highly inefficient: bad joins, no indexes, and so on. It was satisfying to fix the mess.
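Roughly the shape of the problem and the fix, as a sketch; the driver (psycopg2), the DSN, and the fetch_order_* functions are hypothetical stand-ins, since the post doesn't name the actual stack:

    import time
    import psycopg2
    from psycopg2.pool import ThreadedConnectionPool

    # Anti-pattern described above: every query opens its own connection and,
    # when that fails, sleeps and retries -- so loops spend most of their time waiting.
    def fetch_order_slow(order_id):
        for _attempt in range(5):
            try:
                conn = psycopg2.connect("dbname=shop")   # new connection per query
            except psycopg2.OperationalError:
                time.sleep(2)                            # connections exhausted -> everyone sleeps
                continue
            try:
                with conn.cursor() as cur:
                    cur.execute("SELECT * FROM orders WHERE id = %s", (order_id,))
                    return cur.fetchone()
            finally:
                conn.close()

    # Fix: create one pool at startup and borrow/return a connection around each query.
    POOL = ThreadedConnectionPool(2, 10, "dbname=shop")

    def fetch_order_fast(order_id):
        conn = POOL.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT * FROM orders WHERE id = %s", (order_id,))
                return cur.fetchone()
        finally:
            POOL.putconn(conn)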

  • I had two similar experiences in recent years: I added some database indices, the application became super fast, the rest of the team started acting weird, and I got shown the door.

    At this point I'm pretty sure anything IT-related has become a bullshit job.

I've done something similar: you start out with something naive and simple like mysqldump, which takes an age, and move on to more specialised tools like Percona XtraBackup, which allows for incremental backups.
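Roughly what that progression looks like on the command line; the paths are made up, and the exact options are worth checking against the versions you run:

    # logical dump: simple, but single-threaded and slow on large databases
    mysqldump --single-transaction --all-databases > full_dump.sql

    # Percona XtraBackup: physical full backup of the data files
    xtrabackup --backup --target-dir=/backups/full

    # later runs only copy pages changed since the base backup
    xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full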

  • Any tips on similar tools for Postgres?

    • Depends on the environment. If you can do disk snapshots, that's the way to go (it can be hard with disk striping). wal-g works for storing both base backups and WAL to various storages in parallel, and it can be throttled with environment variables; a rough sketch follows below.

      Source: worked on Azure's managed Citus pg
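      A rough sketch of that wal-g setup; the bucket, paths, and numbers here are invented, and the variable names should be double-checked against the current wal-g docs:

        # point wal-g at object storage and cap its resource usage
        export WALG_S3_PREFIX="s3://example-bucket/pg-backups"
        export WALG_UPLOAD_CONCURRENCY=4            # parallel upload streams
        export WALG_NETWORK_RATE_LIMIT=104857600    # ~100 MB/s network cap
        export WALG_DISK_RATE_LIMIT=104857600       # ~100 MB/s disk-read cap

        # push a base backup of the data directory
        wal-g backup-push /var/lib/postgresql/data

        # ship WAL continuously via postgresql.conf:
        #   archive_mode = on
        #   archive_command = 'wal-g wal-push %p'

        # restore: fetch the latest base backup, then let Postgres replay WAL
        wal-g backup-fetch /var/lib/postgresql/data LATEST
        #   restore_command = 'wal-g wal-fetch %f %p'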