Comment by fg137

6 days ago

I have seen, many times, code that has lots of tests but doesn't work.

Why?

Some of the patterns I've seen:

* The code is only called from tests but never called in production

* Tests are not testing the actual application logic, or the logic that matters. In some cases, the tests have nothing to do with the application code at all: they do not even run any application code.

* Tests repeat the same logic as the application code (a tautology). This happens all the time.

* The application code is actually incorrect, but the tests use the wrong expected value so that they pass, disregarding what should happen.
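The tautology and wrong-expected-value patterns can be sketched with a small hypothetical example (the function, names, and values here are invented for illustration):

```python
def apply_discount(price: float, rate: float) -> float:
    # Bug: this should be price * (1 - rate), not price * rate.
    return price * rate

# Anti-pattern: tautological test. It re-derives the expected value with
# the same (buggy) formula the application uses, so it can never fail.
def test_discount_tautology():
    price, rate = 100.0, 0.2
    assert apply_discount(price, rate) == price * rate

# Anti-pattern: wrong expected value. The assertion was written to match
# the buggy output (20.0) instead of the intended behavior (80.0).
def test_discount_wrong_expectation():
    assert apply_discount(100.0, 0.2) == 20.0

test_discount_tautology()
test_discount_wrong_expectation()
```

Both tests pass, the suite is green, and the code is still wrong: a 20% discount on 100.0 should yield 80.0, not 20.0.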

That's using the latest models.

To make things worse, people apparently never bothered to go through the manual workflow even once to verify the behavior.

Good luck just relying on tests.

I think you and I don't share the same definition of "test". Are you thinking about unit tests? I'm thinking about unit tests, smoke tests, integration tests, e2e tests, functional tests, manual QA tests and probably even "the-product-works-as-expected-as-I-can-see-from-the-amazon-reviews-of-our-clients" tests.

I agree with your point of view in general, but "having tests" doesn't mean "having great tests". If I rewrite my code, give the binary to our clients, and they don't see any difference or bug, well, that means the rewrite passed the ultimate test. In fact, the percentage of our clients who care about implementation details (such as the programming language) is precisely 0%.

  • I'm not interested in debating what "test" means. There is a standard definition in the software industry.

  • Ok with that. So yes, I stand by my stance: looking at tests is enough. We can debate when tests can be considered complete enough.