Comment by thayne
3 years ago
One issue I've seen with this is that if you have a single, very large database, it can take a very, very long time to restore from backups. Or, for that matter, just to take the backups in the first place.
I'd be interested to know if anyone has a good solution for that.
Here's the way it works for, say, Postgresql:
- you rsync or zfs send the database files from machine A to machine B. You would like the database to be off during this process, which will make it consistent. The big advantage of ZFS is that you can stop PG, snapshot the filesystem, and turn PG on again immediately, then send the snapshot. Machine B is now a cold backup replica of A. Your loss potential is limited to the time between backups.
- after the previous step is completed, you arrange for machine A to send WAL files to machine B. It's well documented. You could use rsync or scp here. It happens automatically and frequently. Machine B is now a warm replica of A -- if you need to turn it on in an emergency, you will only have lost one WAL file's worth of changes.
- after that step is completed, you give machine B credentials to login to A for live replication. Machine B is now a live, very slightly delayed read-only replica of A. Anything that A processes will be updated on B as soon as it is received.
You can go further and arrange to load balance requests between read-only replicas, while sending the write requests to the primary; you can look at Citus (now open source) to add multi-primary clustering.
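A rough sketch of those three steps, assuming made-up hostnames (machine-a, machine-b), a hypothetical ZFS dataset tank/pgdata, and PostgreSQL 12 or newer (service name and paths will differ on your system):

    # step 1: initial copy -- run on machine A
    systemctl stop postgresql
    zfs snapshot tank/pgdata@base
    systemctl start postgresql          # PG is only down for the snapshot itself
    zfs send tank/pgdata@base | ssh machine-b zfs receive tank/pgdata

    # step 2: WAL shipping -- on machine A, in postgresql.conf:
    #   archive_mode = on
    #   archive_command = 'rsync -a %p machine-b:/var/lib/postgresql/wal_archive/%f'
    # and on machine B: restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'

    # step 3: streaming replication -- on machine B, in postgresql.conf:
    #   primary_conninfo = 'host=machine-a user=replicator password=secret'
    # plus an empty standby.signal file in B's data directory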
This isn't really a backup, it's redundancy, which is a good thing but not the same as a backup solution. You can't get out of a "drop table" in production type of event this way.
The previous commenter was probably unaware of the various ways to back up recent PostgreSQL releases.
For what you describe, a "point in time recovery" backup would probably be the more appropriate flavor: https://www.postgresql.org/docs/current/continuous-archiving...
It was first released around 2010 and has gained robustness with every release, hence not everyone is aware of it.
For instance, I don't think it's really required anymore to shut down the database to do the initial sync if you use the proper tooling (pg_basebackup, if I remember correctly).
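Something roughly like this, I think (host, user name and target path are made up):

    # run on the standby: streams the data files plus the WAL needed for consistency,
    # while the primary stays online
    pg_basebackup -h machine-a -U replicator -D /var/lib/postgresql/15/main \
        --wal-method=stream --checkpoint=fast --progress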
1 reply →
Going back 20 years with Oracle DB it was common to use "triple mirror" on storage to make a block level copy of the database. Lock the DB for changes, flush the logs, break the mirror. You now have a point in time copy of the database that could be mounted by a second system to create a tape backup, or as a recovery point to restore.
It was the way to do it, and very easy to manage.
If you add a delay of say 30 minutes for one of your replicas, you have another option in a "drop table" type event.
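On PostgreSQL 12 or newer that should be a single setting on the delayed standby, something like:

    # hold WAL application back by 30 minutes; WAL is still received immediately
    psql -c "ALTER SYSTEM SET recovery_min_apply_delay = '30min'"
    psql -c "SELECT pg_reload_conf()"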
If you stop at the first bullet point then you have a backup solution.
6 replies →
Do you even have to stop Postgres if using ZFS snapshots? ZFS snapshots are atomic, so I’d expect that to be fine. If it wasn’t fine, that would also mean Postgres couldn’t handle power failure or other sudden failures.
You have choices.
* shut down PG. Gain perfect consistency.
* use pg_dump. Perfect consistency at the cost of a longer transaction. Gain portability for major version upgrades.
* Don't shut down PG: here's what the manual says:
However, a backup created in this way saves the database files in a state as if the database server was not properly shut down; therefore, when you start the database server on the backed-up data, it will think the previous server instance crashed and will replay the WAL log. This is not a problem; just be aware of it (and be sure to include the WAL files in your backup). You can perform a CHECKPOINT before taking the snapshot to reduce recovery time.
* Midway: use SELECT pg_start_backup('label', false, false); and SELECT * FROM pg_stop_backup(false, true); around the file copy, and add the WAL files generated while the backup was running to your backup (sketch below).
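For that midway option, something along these lines should work on releases before 15 (newer ones renamed the functions to pg_backup_start/pg_backup_stop); the dataset name is hypothetical, and the non-exclusive variant needs start and stop to happen in the same session, hence a single psql invocation:

    psql <<'EOF'
    SELECT pg_start_backup('snap', false, false);
    \! zfs snapshot tank/pgdata@snap
    -- save the labelfile column this returns as backup_label next to the snapshot
    SELECT * FROM pg_stop_backup(false, true);
    EOF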
Presumably it doesn't matter if you break your DB up into smaller DBs; you still have the same amount of data to back up no matter what. However, now you also have the problem of snapshot consistency to worry about.
If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
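With pg_dump that can be as targeted as (table pattern and database names invented):

    # dump only one customer's tables, then restore them elsewhere
    pg_dump -Fc -t 'customer_42_*' -f customer_42.dump proddb
    pg_restore -d restoredb customer_42.dump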
> you still have the same amount of data to back up no matter what
But you can restore/back up the databases in parallel.
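For example, with the directory format both directions can be parallelized, and separate databases can simply be dumped concurrently (job count and names arbitrary):

    # dump with 4 parallel workers, then restore the same way
    pg_dump -Fd -j 4 -f /backups/db1 db1
    pg_restore -j 4 -d db1 /backups/db1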
> If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
I'm not aware of a good way to restore just a few tables from a full db backup; at least not one that doesn't require copying over all the data (because the backup is stored over the network, not on a local disk). And that may be desirable to recover from, say, a bug corrupting or deleting a customer's data.
Try out pg_probackup. It works on the database files directly, and restore is as fast as you can write to your SSD.
I've set up a pgsql server with timescaledb recently. Continuous backup based on WAL takes seconds each hour, and a complete restore takes 15 minutes for almost 300 GB of data, because the 1 GBit connection to the backup server is the bottleneck.
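If I remember the pg_probackup CLI correctly, the basic flow is roughly this (paths and instance name are placeholders):

    pg_probackup init -B /backups
    pg_probackup add-instance -B /backups -D /var/lib/postgresql/15/main --instance main
    pg_probackup backup  -B /backups --instance main -b DELTA --stream
    pg_probackup restore -B /backups --instance main -D /var/lib/postgresql/15/main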
For MySQL there is xtrabackup - https://www.percona.com/software/mysql-database/percona-xtra....
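Basic usage is roughly like this, if I remember the flags right (target dir is just an example):

    # take the backup, make it consistent ("prepare"), then copy it back into an empty datadir to restore
    xtrabackup --backup  --target-dir=/backups/full
    xtrabackup --prepare --target-dir=/backups/full
    xtrabackup --copy-back --target-dir=/backups/full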
On MariaDB you can tell the replica to enter a snapshottable state [1], take a simple LVM snapshot, tell the database it's over, back up your snapshot somewhere else, and finally delete the snapshot.
1) https://mariadb.com/kb/en/storage-snapshots-and-backup-stage...
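Something along these lines, assuming the data directory sits on a hypothetical LVM volume vg0/mariadb. The BACKUP STAGE has to be held open by the same session while the snapshot is taken, hence shelling out from the client with \!; if that doesn't work non-interactively for you, keep an interactive session open in one terminal and snapshot from another:

    mysql <<'EOF'
    BACKUP STAGE START;
    BACKUP STAGE BLOCK_COMMIT;
    \! lvcreate --snapshot --size 10G --name mariadb-snap /dev/vg0/mariadb
    BACKUP STAGE END;
    EOF
    # mount the snapshot, copy it to the backup host, then drop it:
    lvremove -y /dev/vg0/mariadb-snap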
I found this approach pretty cool in that regard: https://github.com/pgbackrest/pgbackrest
Not a solution, but using event sourcing would have prevented this.