Comment by francislavoie

1 day ago

Restarting the DB is unfortunately way too slow. We run the DB in a docker container with a tmpfs (in-memory) volume which helps a lot with speed, but the problem is still the raw compute needed to wipe the tables and re-fill them with the fixtures every time.

How about doing the changes once and then baking them into the DB docker image, i.e. `docker commit`?

Then spin up the DB from that image instead of an empty one for every test run.

This implies starting the DB through docker is faster than what you're doing now of course.
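A rough sketch of the `docker commit` idea (container and image names like `seed-db` and `db-with-fixtures` are placeholders, and I'm assuming Postgres just as an example):

```shell
# Sketch only: bake the fixtures into a reusable image once,
# then start every test run from the pre-seeded image.

# 1. Start a throwaway DB container from the vanilla image.
#    PGDATA is relocated out of the declared volume so the data
#    actually ends up in the image layer (docker commit does NOT
#    capture volume contents).
docker run -d --name seed-db -e PGDATA=/data postgres:16

# 2. Load schema and fixtures once, however you do it today.
docker exec -i seed-db psql -U postgres < fixtures.sql

# 3. Freeze the container filesystem into an image.
docker stop seed-db
docker commit seed-db db-with-fixtures:latest
docker rm seed-db

# 4. Each test run starts pre-seeded; no wipe-and-refill.
docker run -d --rm --name test-db db-with-fixtures:latest
```

One caveat worth checking: the official Postgres image declares `/var/lib/postgresql/data` as a `VOLUME`, and `docker commit` skips volumes, hence the `PGDATA=/data` workaround above. Other DB images may have the same gotcha.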

I have not done this myself, so it's theorycrafting, but couldn't you do the following?

1. Have a local data dir with initial state

2. Create an overlayfs with a temporary directory

3. Launch your job in your docker container with the overlayfs bind mount as your data directory

4. That’s it. Writes go to the overlay and the base directory is untouched
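The steps above might look something like this (Linux with root assumed; all paths are placeholders, and `db-image` stands in for whatever DB image you run):

```shell
# Sketch of steps 1-4. Not tested; overlayfs requires upperdir
# and workdir to live on the same filesystem.

BASE=/srv/db-base        # 1. read-only dir holding the seeded data
UPPER=$(mktemp -d)       # writes from the test land here
WORK=$(mktemp -d)        # overlayfs bookkeeping dir
MERGED=$(mktemp -d)      # combined view the container sees

# 2. Create the overlay: BASE below, UPPER on top.
mount -t overlay overlay \
  -o lowerdir=$BASE,upperdir=$UPPER,workdir=$WORK "$MERGED"

# 3. Hand the merged view to the container as its data dir.
docker run -d --rm --name test-db \
  -v "$MERGED":/var/lib/postgresql/data db-image

# 4. Writes go to $UPPER; $BASE is never modified.
```

If write speed is the worry, the upper dir could presumably sit on a tmpfs itself, so runtime writes stay in memory while the seeded base lives on disk.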

  • But how does the reset happen fast? The problem isn't preventing permanent writes or w/e, it's actually resetting for the next test. Also, overlayfs will immediately be slower at runtime than the tmpfs we're already using.

    • Resetting is free if you discard the overlayfs writes, no? I am not sure if one can discard at runtime, or if the next test should be run in a new container. But that should still be fast.

      If your db is small enough to fit in tmpfs, then sure, that is hard to beat. But then xfs and zfs are overkill too.

      EDIT: I see you mentioning that starting the db is slow due to wiping and filling at runtime. But the idea of a snapshot is that you don't have to do that, unless I misunderstand you.
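      To make the "discard the overlay" idea concrete, the reset between tests could look roughly like this (self-contained sketch, same caveats as before: untested, placeholder paths, needs root):

      ```shell
      # Assumed layout: $BASE is the seeded read-only data dir,
      # $UPPER/$WORK/$MERGED are the overlayfs dirs from setup.
      BASE=/srv/db-base
      UPPER=/tmp/db-upper
      WORK=/tmp/db-work
      MERGED=/tmp/db-merged

      # Stop the container and tear down the overlay.
      docker stop test-db
      umount "$MERGED"

      # Discard everything the last test wrote; $BASE is untouched.
      rm -rf "$UPPER"/* "$WORK"/*

      # Remount and restart: next test sees pristine seeded data.
      mount -t overlay overlay \
        -o lowerdir=$BASE,upperdir=$UPPER,workdir=$WORK "$MERGED"
      docker run -d --rm --name test-db \
        -v "$MERGED":/var/lib/postgresql/data db-image
      ```

      Whether that mount/unmount cycle beats a tmpfs wipe-and-refill would need benchmarking; it only skips the refill, not the container restart.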