Comment by dkarl

8 hours ago

That's pretty cool, and now I'm curious if there's something similar for ScalaCheck. My comment comes from my own experience, though, introducing Hypothesis and ScalaCheck into codebases and quickly causing noticeable increases in unit test times. I think the additional runtime for tests is undoubtedly worth it, but maybe not a good trade-off when people are used to running unit tests several times an hour as part of their development cycle. To avoid people saying, "Running four minutes of tests five times per hour is ruining my flow and productivity," I make sure they have a script or command to run a subset of basic, less comprehensive tests, or to only run the tests relevant to the changes they've made.

Or a watch command that runs tests in the background on save, and an IDE setting to flag code when the watched tests produce a failure. Get used to that, and it's not even a matter of stopping to run the tests: they run every time you hit Ctrl-S, and you just keep on typing — and every so often the IDE notifies you of a failed test.

The drawback is that you might get used to saying, "Well, of course the tests failed, I'm in the middle of a refactor and haven't finished yet," and start tuning out the failed-test notifications. But just as I ignore typecheck errors while I'm still typing and then check whether they're still there once I've finished, you probably won't learn to ignore test failures entirely. Though a five-minute lag between "I'm finished typing, so those test errors should go away" and them actually going away might be disconcerting.

  • You can also set things up so that you only run 10 examples per test when doing a quick check during development, but your CI runs the full 200 examples per test (or even more).
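In Hypothesis this dev/CI split is exactly what settings profiles are for: register one profile per environment and load the appropriate one (e.g. based on an environment variable). A minimal sketch, with the profile names and example counts chosen arbitrarily for illustration:

```python
from hypothesis import given, settings, strategies as st

# Register a fast profile for local development and a thorough one for CI.
settings.register_profile("dev", max_examples=10)
settings.register_profile("ci", max_examples=200)

# In practice you'd select the profile from an env var, e.g.
# settings.load_profile(os.environ.get("HYPOTHESIS_PROFILE", "dev"))
settings.load_profile("dev")

@given(st.lists(st.integers()))
def test_reverse_twice_is_identity(xs):
    # A classic round-trip property: reversing a list twice yields the original.
    assert list(reversed(list(reversed(xs)))) == xs
```

With the "dev" profile loaded, each property runs only 10 examples on save; CI loads "ci" and runs the full 200. ScalaCheck offers the analogous knob via `Test.Parameters.withMinSuccessfulTests`.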