Comment by rkangel
5 years ago
I used to think this and have come to realise that this is definitely not true. The problem that a thorough automated test suite can cause is that it becomes very painful to refactor code.
As you add code, the best structure for that code changes and you want to refactor. I'm not just talking here about pulling some shared code into a new function; I'm talking about moving responsibilities between modules, changing which data lives in which data structures, etc. These changes are the key to ensuring your code stays maintainable and makes sense. Every unit test you add 'pins' the boundary of your module (or class, or whatever is appropriate to your language). If you have lots of tests with repeated code, it can take five times as long to fix the tests as to make the actual refactor. That makes refactoring painful, which usually means people don't do it as readily (because the subconscious cost-benefit analysis is shifted).
If - on the other hand - you treat your test suite as a bit of software to be designed and maintained like any other, then you improve this situation. Multiple tests hitting the same interface are probably doing it through a common helper function that you can adjust in one place, rather than in 20 tests. Your 'fixtures' live in one place that can be updated and are reused in multiple places. This usually means that your test suite helps more with the transition too - you get more confidence you've refactored correctly.
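A minimal sketch of that idea (all names here are hypothetical, not from any particular codebase): many tests construct the thing under test through one shared helper, so an interface change during a refactor is absorbed in one place instead of twenty.

```python
class UserStore:
    """Toy module under test (a stand-in for the real interface)."""

    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


def make_store_with_users(users):
    """Shared test helper: the only place that knows how to construct
    and populate a UserStore. If the constructor or add() signature
    changes in a refactor, only this function needs updating."""
    store = UserStore()
    for user_id, name in users.items():
        store.add(user_id, name)
    return store


def test_lookup_existing_user():
    store = make_store_with_users({1: "alice"})
    assert store.get(1) == "alice"


def test_lookup_missing_user():
    store = make_store_with_users({})
    assert store.get(2) is None
```

The tests state only what they care about (which users exist, what a lookup returns); the mechanics of setup live in one place.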
The other part of this problem (which is maybe more controversial?) is that I try not to rely too much on lots of unit tests, and lean more on testing sets of modules together. These tests prove that modules interact with each other correctly (which unit tests do not), and are also changed less when you refactor (and give confidence you didn't break anything when you refactor).
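To illustrate (again with made-up toy modules): a test that drives two modules through their combined outer boundary pins only that boundary, so the internal interface between them can change freely during a refactor.

```python
def parse(expr):
    """Tiny 'parser' module: '2 + 3' -> (2, '+', 3)."""
    left, op, right = expr.split()
    return int(left), op, int(right)


def evaluate(parsed):
    """Tiny 'evaluator' module, consuming the parser's output."""
    left, op, right = parsed
    if op == "+":
        return left + right
    if op == "-":
        return left - right
    raise ValueError(f"unknown operator: {op}")


def test_parse_and_evaluate_together():
    # Only the outer boundary is pinned here; the tuple format passed
    # between parse() and evaluate() can be reshaped in a refactor
    # without touching this test.
    assert evaluate(parse("2 + 3")) == 5
    assert evaluate(parse("10 - 4")) == 6
```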
I was mostly referring to integration tests. And yes, there are basics like fixtures which do get DRY'd out, but they really need to be as unambiguous as possible in their mental model, e.g. `insert(<table>,[<c:v>])` for a database entry.
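A rough sketch of what such an unambiguous helper might look like (the `insert` signature above is the comment's own shorthand; this dict-backed "database" is purely illustrative): DRY'd out, but so direct that a reader can see exactly which row lands in which table.

```python
def insert(db, table, row):
    """One call, one row, no hidden defaults:
    insert(db, "users", {"id": 1, "name": "alice"})"""
    db.setdefault(table, []).append(dict(row))


db = {}
insert(db, "users", {"id": 1, "name": "alice"})
insert(db, "orders", {"id": 7, "user_id": 1})
```

The point is that the helper's mental model stays a one-liner: anyone reading a test knows the resulting state without chasing setup code.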
I guess my point was not that you never DRY in tests, just that you should be much pickier about when to DRY than you are in production code, and that is necessarily in opposition to the advice in OP.