Comment by acroback
6 years ago
TDD needs to die. It is a curse.
There should be integration tests, along with some property-based tests and fuzz tests. That usually catches a lot of things. Invest in monitoring and alerting too.
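To sketch what a property-based test can look like (a minimal example in Rust using the proptest crate; the fizzbuzz function here is just a stand-in, not from any codebase discussed in this thread):

```rust
// A hypothetical function under test.
fn fizzbuzz(n: u32) -> String {
    match (n % 3, n % 5) {
        (0, 0) => "FizzBuzz".to_string(),
        (0, _) => "Fizz".to_string(),
        (_, 0) => "Buzz".to_string(),
        _ => n.to_string(),
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use proptest::prelude::*;

    proptest! {
        // Property: every multiple of 15 must map to "FizzBuzz",
        // whichever multiple the generator happens to pick.
        #[test]
        fn multiples_of_15_are_fizzbuzz(k in 1u32..1000) {
            prop_assert_eq!(fizzbuzz(k * 15), "FizzBuzz");
        }
    }
}
```

The point is that the property has to hold for every generated input, not just the handful of cases a hand-written example test would enumerate.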
TDD is like relying on a debugger to solve your problem. Is a debugger a good tool? Yes, it is a great tool. But using it as an excuse to avoid understanding what happens under the hood is plain wrong.
The problem lies in an industry where software engineering is not given any value, but whiteboarding and solving puzzles is.
Software engineering is a craft honed over years of making mistakes and learning from them. Want code ASAP? Kick the experienced engineers out, get code monkeys in, and get an MVP.
Quality is not a clever algorithm, but clear, concise logic. Code should follow the logic, not the other way around.
Clear > clever.
And yet tests seem to have made this massive garbage heap actually work and enable a lump of spaghetti to continue to operate as a viable product. It doesn't mean you should write bad code, but it seems like if it can make even the most awful of code viable, then that's a pretty good system. The fact that modern medicine allows the most beat up and desperate to continue to live on isn't an indictment against medicine, it's a testament to it. Don't write bad code, sure. We can all agree to that. Don't prioritize testing? Why? To intentionally sabotage yourself so that you're forced to rewrite it from scratch or go out of business?
Depends on the definition of viable.
I’m sympathetic, but this is too strong: what needs to die is the dogma. TDD as a way of thinking about the API you’re writing is good, but anything will become a problem if you see it as a holy cause rather than as a tool that is good only to the extent that it delivers results.
I very much agree.
I remember when I realized that TDD shouldn't carry as much weight in our development as it had gotten (back when it was high on the hype curve).
It was when we started using a messaging infrastructure that made everything much more reliable and robust, and through which we could start trusting the infrastructure much more (not 100%, of course).
It made me realize that the reason we had this excessively large number of tests (1800+) was the fragile nature of a request/response-based system: we "had to make sure everything worked".
What I'm trying to get at here is that TDD had assumed the role of a large safety net for a problem we should have addressed in a different manner. After introducing the messaging, we could replay messages that had failed. After this huge turning point, tests were used only for what they should have been used for all along: ensuring predictable change in core functionality.
(our code also became easier to understand and more modular, but that's for another time...)
What you allude to there is pretty bad TDD. It was never intended as a replacement for good design, but rather as an aid to being clear about design and requirements without writing tons of specs up front.
And I agree that there are lots of anti-patterns that have grown in tandem with TDD, like excessive mocking with dependency injection frameworks, or testing renamed identity functions over and over just to get more coverage. However, I'd argue that is equally the fault of object-oriented programming.
Where I disagree is this: TDD and unit tests are still a very useful tool. Their big advantage is that you can isolate issues more quickly and precisely, IF you use them correctly.
For instance, if I have some kind of algorithm in a backend service operating on a data structure that has a bug, I do not want to spend time on the UI layer, network communication or database interactions to figure out what is going on. Testing at the right scope gives you exactly that.
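For illustration, a minimal sketch of a test at that scope, with made-up names: the algorithm is a plain function over a plain data structure, so the test touches no UI, network, or database.

```rust
// Hypothetical pure business logic: no UI, network, or DB involved.
struct Order {
    items: Vec<(String, u32)>, // (sku, quantity)
}

/// Total quantity across all line items.
fn total_quantity(order: &Order) -> u32 {
    order.items.iter().map(|(_, qty)| *qty).sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn totals_quantities_across_items() {
        let order = Order {
            items: vec![("abc".to_string(), 2), ("xyz".to_string(), 3)],
        };
        // If this fails, the bug is in total_quantity itself --
        // there is no UI layer or database to rule out first.
        assert_eq!(total_quantity(&order), 5);
    }
}
```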
The problem with TDD is that the methodology wants to cover every change, no matter how internal, with some sort of external test.
Some changes are simply not testable, period.
No, you cannot always write a test which initially fails and then passes when the change is made. When that is the case, you should understand why, and not try to force it.
And in some cases you can, yet still should not. If a whole module is rewritten such that the new version satisfies all of the public contracts with the rest of the code, then only those contracts need to be retested; we don't need new tests targeting internals.
It's because the old version wasn't targeted by such tests in the first place that it can be rewritten without upheaval.
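As a rough sketch of that idea (all names here are invented for illustration): the test is written once against the public contract, here expressed as a trait, and a rewritten module only has to keep passing it unchanged.

```rust
// Hypothetical public contract shared by the old and the rewritten module.
pub trait KeyValueStore {
    fn put(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::collections::HashMap;

    // The contract test targets the trait only, never internals.
    // Old and new implementations both have to pass it as-is.
    fn check_contract(mut store: impl KeyValueStore) {
        store.put("k", "v");
        assert_eq!(store.get("k"), Some("v".to_string()));
        assert_eq!(store.get("missing"), None);
    }

    // Stand-in for the rewritten module; the old version would
    // plug into the same test the same way.
    struct MapStore(HashMap<String, String>);

    impl KeyValueStore for MapStore {
        fn put(&mut self, key: &str, value: &str) {
            self.0.insert(key.to_string(), value.to_string());
        }
        fn get(&self, key: &str) -> Option<String> {
            self.0.get(key).cloned()
        }
    }

    #[test]
    fn rewrite_satisfies_public_contract() {
        check_contract(MapStore(HashMap::new()));
    }
}
```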
Bullshit. Saying "period" doesn't make bullshit true.
I think TDD is the best way to develop (so far). Obviously tests are code, and if you write crappy, highly coupled tests you will just end up with even messier code. That is a clear example of bad testing. The greatest advantage of TDD is in design: everything should be modular and easy to unit test, so you can:
- reproduce a bug and verify your bugfix in a matter of ms with a proper unit test
- understand what the code does
- change and refactor the code whenever you want
You can tell from what is written that they are not following TDD. Redesigning that codebase into an easy and clean-to-test design would require exponential effort and time compared to having done it step by step, but it would be worth it.
A unit test is the least useful kind of test. It requires your design to be "easy to unit test" instead of simple, and if you change something and have to rewrite the test, you might miss the same logic error in both places.
Plus the tests never break on their own because they're modular, and each time you run a test that was obviously going to pass, you've wasted your time.
As long as you have code coverage, it's better to have lots of asserts and real-world integration tests.
Integration tests are usually much slower, and you are testing tons of things at the same time. When something breaks (like in that example), you have no idea what went wrong or why.
If you unit test properly, you are unit testing the business logic, which you have to divide up properly and write in a modular fashion. If you want to test a more complex scenario, just add initial conditions or behaviors. If you can't do that, or don't know how to, then you don't know what your code is doing, or your code is badly designed. And that may be the case we read about above.
Tests rarely break, because they help you avoid breaking the code and its functionality, and they are so fast and efficient at telling you so that you don't feel any pain from them.
I can't imagine any example where "easy to unit test" != simple
1 reply →
Bullshit. I have found many bugs writing unit tests for legacy code (after the fact). I have cleaned up incredibly messy spaghetti code by writing unit tests (which forced me to clean it up). I have made code understandable by writing unit tests (both in the sense of "tests as documentation" and in the sense of "clean up the code so that it is testable, make it modular and made of small units").
And btw, "easy to unit test" actually leads to "simple". That's the point of TDD.
In my opinion, the only thing that is valuable about unit tests is more appropriately captured in the form of function, class and module contracts (as in "design by contract"). Unfortunately, very few languages have adopted DbC.
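For what that might look like in a language without native DbC support, here is a rough approximation using plain assertions (the function and its contract are hypothetical):

```rust
/// Hypothetical sketch: approximating design-by-contract with assertions,
/// since Rust has no native pre/postcondition syntax.
fn withdraw(balance: u64, amount: u64) -> u64 {
    // Precondition: the caller may not overdraw.
    debug_assert!(amount <= balance, "precondition violated: overdraw");

    let new_balance = balance - amount;

    // Postcondition: exactly `amount` was deducted -- the invariant
    // a separate unit test would otherwise have to pin down.
    debug_assert_eq!(new_balance + amount, balance, "postcondition violated");
    new_balance
}

fn main() {
    println!("{}", withdraw(100, 30)); // prints 70
}
```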
Functional tests now, that's another matter. But a lot of TDD dogmatism is centered on unit tests specifically. That results in a lot of code being written that doesn't actually contribute to the product, and that exists solely so that you can chop the product up into tiny bits and unit test them separately. Then on the test side you have tons of mocks and so on. I've seen several codebases where the test code far exceeded the actual product code in complexity, and that's not a healthy state of affairs.
In more recent times I've seen some growth in interest around contract testing. Unit tests are immensely more useful when paired with contract tests; without them, they tend to be more of a hassle. At its essence an integration is a form of contract, but integration tests suffer their own problems. In RSpec you have instance_double, which is a form of contract test as well, but not really sufficient for proper testing IMO. The current state, from what I've seen, is a little lackluster, but I wouldn't be surprised to see contract-testing libraries pop up for a variety of languages.
I had some tests in my codebase, but eventually only documentation and integration tests remained.
So let's look at a simplified example.
https://bitbucket.org/iopq/fizzbuzz-in-rust
My tests are in the test folder. They are actually superfluous, since the integration tests test for the same thing.
I cannot break the program up in a way that would unit test a smaller piece of it in more detail. The only tests I could add would be for the command line driver.
For a single person and their one-person code base, you can certainly get away without unit tests.
This is especially true if your "integration tests" are testing the same component, and not actually integrating with numerous other components being developed by different teams, or if the system is so small it can run on a single workstation.
When working in teams on larger systems, the situation is different. Part of the point of unit tests is the "shift left" that allows problems to be discovered early, ideally before code leaves a developer's machine. It significantly reduces the time until bugs are discovered, and it reduces the impact of one dev's bugs on the other devs on the team.
TDD is yet another in a long line of "methodologies" that don't work. Tests are not a bad thing of course. The problem comes when you turn testing into an ideology and try to use it as a magic fix for all your problems. Same goes for "agile," etc.
Programming is a craft. Good programmers write good code. Bad programmers write bad code. No methodology will make bad programmers write good code, but bureaucratic bullshit can and will prevent good programmers from working at their best. The only way to improve the output of a bad programmer is to mentor them and let them gain experience.
The reality of working in teams at most companies is that there are going to be mediocre programmers, and even bad programmers, on the team. Many of the practices you refer to as bureaucratic bullshit are actually designed to protect programmers from the mistakes of other programmers.
Of course, this does require that the process itself has been set up with care, thought, and understanding of what's being achieved.
I'm probably not the best person to speak on the topic, as I don't use TDD (nor have I ever), but I think the idea is good, if maybe a bit unorthodox: leveraging tests to define examples of inputs/outputs and setting "guards" to make sure the result of your code is as you expected.
I'm not keen on the "cult" of it, but if expectations of what the output should look like are available from the outset, it would appear to be of some benefit, at least.
What about TDD requires not understanding the code?
I'm confused by your comment. Your premise is that TDD should die, yet your support compares it to a "great tool". Should TDD really die, or should people just stop treating things as silver bullets? I personally love TDD; it helps me reason about my interfaces and reduces some of the more cumbersome parts of development. I don't expect everyone to use TDD, and I don't use it all the time. Similarly, I'd never tell someone that debuggers should die and that they should never use one, if that's something that would help them do their job.
The thing is, when I spend a lot of time thinking about how to make my program type-safe, all of my unit tests become either useless or no-ops.
Integration tests, on the other hand, easily survive refactoring.
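To make that concrete, here's a rough sketch (the types and names are mine, not the poster's): once the invalid state can't even be constructed, the unit test that used to guard against it has nothing left to check.

```rust
use std::num::NonZeroU32;

// Hypothetical example: the type system enforces what a unit test
// used to check. A divisor of zero cannot be constructed at all,
// so an `assert!(divisor != 0)` test becomes a no-op.
fn per_item_cost(total_cents: u32, items: NonZeroU32) -> u32 {
    total_cents / items.get() // division by zero is impossible here
}

fn main() {
    let items = NonZeroU32::new(4).expect("zero is rejected at construction");
    assert_eq!(per_item_cost(1000, items), 250);
}
```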
Unit tests are a side effect of TDD, they don't have to be the goal. I'd find value out of TDD even if I deleted all of my tests after. It sounds like your problems are around unit tests, and that is neither something required to TDD nor is it something limited to just TDD.
The problem with integration tests is that they are slow and their number grows exponentially. If they aren't growing exponentially, then there are probably large chunks of untested code. Unit tests suffer their own problems: like you said, they can be useless because of a reliance on mocking, and they can also be brittle and break everywhere with small changes.
Ultimately, any good suite of tests needs some of both: unit tests to avoid exponential branching of your integration tests, and integration tests to catch errors in how your units of code interact. I've experienced plenty of bad test suites; many of them are bad because of poorly written unit tests, but it's often the poorly written integration tests that cause problems as well. As with most things, it's all about a healthy balance.
4 replies →
TDD should only drive the public interface of your "module"; if you're testing your internals, you're doing it wrong. It will hinder refactoring rather than help it.