Comment by proaralyst

You could use LVM or btrfs snapshots (at the filesystem level) if you're ok restarting your database between runs

Restarting the DB is unfortunately way too slow. We run the DB in a Docker container with a tmpfs (in-memory) volume, which helps a lot with speed, but the bottleneck is still the raw compute needed to wipe the tables and re-fill them with the fixtures every time.

  • How about making the changes once, then baking them into the DB Docker image, i.e. "docker commit"?

    Then spin up the DB from that image instead of an empty one for every test run.

    This assumes, of course, that starting the DB through Docker is faster than what you're doing now.
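
    Very roughly, something like the sketch below. It assumes Postgres purely for illustration; the image/container names and the fixture-loading script are placeholders, not anything from this thread:

        # Start a throwaway DB; PGDATA is moved off the image's declared
        # volume so that `docker commit` actually captures the data files.
        docker run -d --name seed-db -e PGDATA=/srv/pgdata postgres:16

        # Load the fixtures once (placeholder for whatever currently
        # populates the tables).
        ./load_fixtures.sh

        # Stop the DB so the data files are in a consistent state,
        # then bake the populated data directory into a new image.
        docker stop seed-db
        docker commit seed-db my-seeded-db:latest

        # Every test run starts from the pre-seeded image and throws it away.
        docker run --rm -d --name test-db my-seeded-db:latest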

  • I have not done this, so it's theorycrafting, but can't you do the following? (Rough sketch after the list.)

    1. Have a local data dir with initial state

    2. Create an overlayfs with a temporary directory

    3. Launch your job in your docker container with the overlayfs bind mount as your data directory

    4. That's it. Writes go to the overlay and the base directory is untouched.
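
    A minimal sketch of that, assuming root for the mount; the seeded data dir, scratch dirs, and the Postgres image are just illustrative placeholders:

        # lowerdir = pristine seeded data dir (read-only base),
        # upperdir/workdir = throwaway scratch space for writes.
        mkdir -p /tmp/db-upper /tmp/db-work /tmp/db-merged
        mount -t overlay overlay \
            -o lowerdir=/srv/db-seed,upperdir=/tmp/db-upper,workdir=/tmp/db-work \
            /tmp/db-merged

        # The container sees the merged view as its data directory;
        # writes land in the upper dir and /srv/db-seed is never touched.
        docker run --rm -d -v /tmp/db-merged:/var/lib/postgresql/data postgres:16

        # Reset between runs = unmount, wipe the upper/work dirs, remount.
        umount /tmp/db-merged
        rm -rf /tmp/db-upper/* /tmp/db-work/*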

    • But how does the reset happen fast? The problem isn't preventing permanent writes or whatever, it's actually resetting for the next test. Also, overlayfs will immediately be slower at runtime than the tmpfs we're already using.

LVM snapshots work well; I used them for years with other database tools. But make sure you allocate enough write space for the copy-on-write (COW) data: when the write space fills up, LVM just drops the snapshot.
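
Roughly like the sketch below; the volume group and LV names are placeholders, and the -L size is the COW write budget mentioned above:

    # Snapshot the seeded data volume; -L is how much COW space the
    # snapshot gets before LVM drops it.
    lvcreate --snapshot --name dbdata-snap --size 10G /dev/vg0/dbdata

    # Mount the snapshot and point the DB at it for the test run.
    mount /dev/vg0/dbdata-snap /mnt/dbdata

    # After the run, throw the snapshot away and take a fresh one.
    umount /mnt/dbdata
    lvremove -f /dev/vg0/dbdata-snap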