Comment by francislavoie

1 day ago

Is anyone aware of something like this for MariaDB?

Something we've been trying to solve for a long time is having instant DB resets between acceptance tests (in CI or locally) back to our known fixture state. Right now a reset takes somewhere between half a second and a couple of seconds (I haven't benchmarked it in a while), and that's by far the slowest thing in our tests.

I just want fast snapshotted resets/rewinds to a known DB state, but I need to be using MariaDB since it's what we use in production; we can't switch DB tech at this stage of the project, even though Postgres' grass looks greener.

I was able to accomplish this by doing each test within its own transaction session that gets rolled back after each test. This way I'm allowed to modify the database to suit my needs for each test, then it gets magically reset back to its known state for the next test. Transaction rollbacks are very quick.
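
For anyone who wants to try the same thing, a minimal sketch of the pattern using pytest and PyMySQL (connection details and table names are made up, and it only works when the code under test shares the fixture's connection):

```python
import pytest
import pymysql

@pytest.fixture
def db():
    # One connection per test, autocommit off, so everything the test
    # does stays inside a single open transaction.
    conn = pymysql.connect(
        host="127.0.0.1", user="test", password="test",
        database="app_test", autocommit=False,
    )
    conn.begin()
    try:
        yield conn
    finally:
        conn.rollback()  # near-instant reset to the fixture state
        conn.close()

def test_rename_user(db):
    with db.cursor() as cur:
        cur.execute("UPDATE users SET name = %s WHERE id = %s", ("Alice", 1))
        cur.execute("SELECT name FROM users WHERE id = %s", (1,))
        assert cur.fetchone()[0] == "Alice"
    # the fixture's rollback undoes the UPDATE before the next test runs
```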

  • As a consultant, I saw many teams doing that and it works well.

    The only detail is that autoincrements (SEQUENCEs for PostgreSQL folks) get bumped even if the transaction rolls back.

    So tables tend to get large ids quickly. But it's just a dev database, so no problem.

  • Unfortunately a lot of our tests use transactions themselves, because we lock the user row whenever we do anything (to ensure consistency), and I'm pretty sure nested transactions are still not a thing.

  • This doesn’t work for testing migrations because MySQL/MariaDB doesn’t support DDL inside transactions, unlike PostgreSQL.

    • Migrations are kind of a different beast. In that case I just stand up a test environment in Docker that does what it needs, then trash it once things have been tested/verified.

You could use LVM or btrfs snapshots (at the filesystem level) if you're ok restarting your database between runs
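
As a rough sketch of the btrfs variant, wrapped in Python for a test harness (the paths and service name are hypothetical; the datadir must live on a btrfs subvolume, and this needs root):

```python
import subprocess

BASELINE = "/srv/mysql/baseline"  # read-only snapshot holding the fixtures
DATADIR = "/srv/mysql/data"       # live datadir, recreated on every reset

def run(*cmd):
    subprocess.run(cmd, check=True)

def reset_database():
    # The snapshot operations themselves are near-instant; the server
    # stop/start is the expensive part, as the reply below points out.
    run("systemctl", "stop", "mariadb")
    run("btrfs", "subvolume", "delete", DATADIR)
    run("btrfs", "subvolume", "snapshot", BASELINE, DATADIR)
    run("systemctl", "start", "mariadb")
```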

  • Restarting the DB is unfortunately way too slow. We run the DB in a docker container with a tmpfs (in-memory) volume, which helps a lot with speed, but the problem is still the raw compute needed to wipe the tables and re-fill them with the fixtures every time.

    • How about doing the changes, then baking them into the DB docker image, i.e. "docker commit"?

      Then spin up the DB using that image instead of an empty one for every test run.

      This implies starting the DB through docker is faster than what you're doing now, of course.
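
      A hedged sketch of that flow (image, database, and file names are made up; one caveat I believe applies: the official mariadb image declares /var/lib/mysql as a VOLUME, and "docker commit" does not capture volume contents, so you'd likely need to relocate the datadir first):

      ```python
      import subprocess

      def run(*cmd, **kw):
          subprocess.run(cmd, check=True, **kw)

      # One-time: start a throwaway server, load the fixtures, bake the
      # result into an image, then remove the scratch container.
      run("docker", "run", "-d", "--name", "seed-db",
          "-e", "MARIADB_ROOT_PASSWORD=test",
          "-e", "MARIADB_DATABASE=app", "mariadb:11")
      # (wait here until the server accepts connections)
      with open("fixtures.sql", "rb") as f:
          run("docker", "exec", "-i", "seed-db",
              "mariadb", "-uroot", "-ptest", "app", stdin=f)
      run("docker", "commit", "seed-db", "app-test-db:fixtures")
      run("docker", "rm", "-f", "seed-db")

      # Per run: containers from this image already contain the data.
      run("docker", "run", "-d", "--rm", "--name", "test-db",
          "app-test-db:fixtures")
      ```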

    • I have not done this, so it’s theorycrafting, but can’t you do the following?

      1. Have a local data dir with initial state

      2. Create an overlayfs with a temporary directory

      3. Launch your job in your docker container with the overlayfs mount bind-mounted in as your data directory

      4. That’s it. Writes go to the overlay and the base directory is untouched.
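
      If it helps, a sketch of those steps (paths are hypothetical; overlay mounts need root, workdir must be on the same filesystem as upperdir, and I'd test carefully since InnoDB on overlayfs has historically had quirks, e.g. around O_DIRECT):

      ```python
      import subprocess

      BASE = "/srv/db/base"      # step 1: pristine datadir with fixtures
      UPPER = "/tmp/db-upper"    # step 2: writable layer (could be tmpfs)
      WORK = "/tmp/db-work"      # overlayfs scratch, same fs as UPPER
      MERGED = "/tmp/db-merged"  # the combined view the container gets

      def run(*cmd):
          subprocess.run(cmd, check=True)

      for d in (UPPER, WORK, MERGED):
          run("mkdir", "-p", d)

      # Steps 2-3: overlay mount, handed to the container as the datadir.
      run("mount", "-t", "overlay", "overlay",
          "-o", f"lowerdir={BASE},upperdir={UPPER},workdir={WORK}", MERGED)
      run("docker", "run", "-d", "--rm", "--name", "test-db",
          "-v", f"{MERGED}:/var/lib/mysql", "mariadb:11")

      # Step 4: writes land in UPPER; to reset, stop the container,
      # unmount MERGED, wipe UPPER and WORK, and mount again.
      ```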

  • LVM snapshots work well. I used them for years with other database tools. But make sure you allocate enough write space for the COW: when the write space fills up, LVM just 'drops' the snapshot.
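
    For reference, the COW space is reserved when the snapshot is created; a sketch with made-up volume names:

    ```python
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Reserve 10G of COW space up front; if writes to the origin exceed
    # this, LVM invalidates the snapshot (the failure mode noted above).
    run("lvcreate", "--snapshot", "--size", "10G",
        "--name", "dbsnap", "/dev/vg0/dbdata")

    # To roll back: stop the DB, merge the snapshot into the origin
    # (the merge begins once the origin is deactivated), then restart.
    run("lvconvert", "--merge", "/dev/vg0/dbsnap")
    ```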