
Comment by apwell23

5 months ago

> For example, it may run a lot more tests than what's practical in a local system.

Yes, this is what I was talking about. If there are lots of tests that are not practical to run locally, then they are bad tests, no matter how useful one might think they are. The only good tests are the ones that run fast. It is also a sign that the code itself is bad if you are forced to write tests that interact with the outside world.

For example, you can extract the logic into a presentation layer and write unit tests for that, instead of mixing UI and business logic and writing browser tests for it. There are well-known patterns for this, like Model-View-Presenter.
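As a rough sketch of what I mean (all the names here are made up for illustration, not from any particular framework): the presenter holds the decisions, the view is a dumb interface, and the unit test needs nothing but a couple of hand-rolled fakes.

```python
from dataclasses import dataclass
from typing import Protocol


class LoginView(Protocol):
    def show_error(self, message: str) -> None: ...
    def show_dashboard(self) -> None: ...


class AuthService(Protocol):
    def check(self, user: str, password: str) -> bool: ...


@dataclass
class LoginPresenter:
    view: LoginView
    auth: AuthService

    def submit(self, user: str, password: str) -> None:
        # All the decision-making lives here, with no UI code involved.
        if not user or not password:
            self.view.show_error("username and password are required")
            return
        if self.auth.check(user, password):
            self.view.show_dashboard()
        else:
            self.view.show_error("invalid credentials")


# The unit test needs no browser: hand-rolled fakes stand in for UI and auth.
class FakeView:
    def __init__(self):
        self.errors, self.dashboard_shown = [], False
    def show_error(self, message): self.errors.append(message)
    def show_dashboard(self): self.dashboard_shown = True


class AlwaysOkAuth:
    def check(self, user, password): return True


def test_valid_login_goes_to_dashboard():
    view = FakeView()
    LoginPresenter(view, AlwaysOkAuth()).submit("alice", "s3cret")
    assert view.dashboard_shown and not view.errors
```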

I would rather put my effort into that than into figuring out how to run tests that launch databases and browsers, call APIs, start containers, etc. Everywhere I've seen these kinds of tests, they've contributed to the "it sucks to work on this code" feeling, and bad vibes are the worst thing that can happen to a codebase.

It does suck when those large-scale integration tests fail, but sometimes that's the only real way to test something. E.g. I have to call a service owned by another team. It has a schema and documentation, so I can mock out what I think it will return, but how will I truly know the API is going to do what it says or what I think it says without actually calling the API?
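To illustrate (the endpoint and field names below are hypothetical): a test against a mock built from the documented schema only proves my code matches my reading of the docs, and it will keep passing even if the real service has drifted.

```python
import requests


def fetch_account_status(base_url: str, account_id: str) -> str:
    # Hypothetical client for the other team's service.
    resp = requests.get(f"{base_url}/accounts/{account_id}")
    resp.raise_for_status()
    return resp.json()["status"]  # documented as "active" | "closed"


def test_fetch_account_status(monkeypatch):
    # Mocked response built from what I *think* the docs promise.
    class FakeResponse:
        def raise_for_status(self): pass
        def json(self): return {"status": "active"}

    monkeypatch.setattr(requests, "get", lambda url: FakeResponse())
    assert fetch_account_status("http://fake", "42") == "active"
```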

  • > I truly know the API is going to do what it says or what I think it says without actually calling the API?

    What if the API changes all of a sudden in production? What about cases where the API stays the same but the content of the response is all wrong? How do tests protect you from that?

    Edit: these are not hypothetical scenarios. Wrong responses are way more common than schema breakages, and upstream tooling is often pretty good at catching schema breakages.

    Wrong responses also often cause way more havoc than schema breakages, because you at least get an alert for schema failures in the app anyway.

    • Tests can't catch everything; it's a question of cost/benefit, and of stopping when the diminishing returns from further tests (or other QA work) aren't enough to justify the cost of further investment in them (including the opportunity cost of time that could be spent improving QA elsewhere).

      For your example, the best place to invest would be in that API's own test suite (e.g. sending its devs examples of the usage we rely on); but of course we can't rely on others to make our lives easier. Contracts can help with that, by making the API developers responsible for following some particular change-notification process.

      Still, such situations are hypothetical; whereas the sorts of integration tests that the parent is describing are useful to keep our deployments from immediately blowing up.

    • That's exactly the point, isn't it? If the schema or response format changes, we want to catch that quickly. Canary, staging, or prod, take your pick, but we need to call the API and run some assertions against it to make sure it's good.

  • Tbf, that's what post-deployment verification tests are ideal for, instead of running them as integration/e2e tests that block your merges/deployments.

    • That's fine. If they're too slow, run them post-deployment. But do run them at some point, so you can at least catch it quickly without waiting for user complaints.
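      Something like this, as a rough sketch (the URL, endpoint, and expected fields are all made up), run against canary/staging/prod after a deploy rather than in the merge gate:

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical deployed environment


def test_accounts_endpoint_still_matches_what_we_rely_on():
    # Post-deployment verification: hit the real, deployed API.
    resp = requests.get(
        f"{BASE_URL}/api/v1/accounts/health-check-account", timeout=10
    )
    assert resp.status_code == 200

    body = resp.json()
    # Assert only on the parts of the contract *we* depend on,
    # not the whole schema.
    assert body["status"] in {"active", "closed"}
    assert isinstance(body["balance"], (int, float))
```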

It is a tradeoff, e.g. running tests against a real database or other supporting service and taking longer, vs. mocking things out and having a test environment that is less like reality.

  • https://testcontainers.com/ is not quite the solution to all your problems, but it makes working with real databases and supporting services pretty much as easy as mocking them would be.

    I'd really recommend against mocking dependencies for most tests, though. Don't mock what you don't own, and do make sure you test each abstraction layer appropriately.
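    For example, a minimal sketch using the Python flavour (testcontainers-python) against a real Postgres; it assumes Docker is available locally and uses SQLAlchemy purely for convenience:

```python
import sqlalchemy
from testcontainers.postgres import PostgresContainer


def test_upsert_against_a_real_postgres():
    # Spins up a throwaway Postgres in Docker for the duration of the test.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "create table users (id int primary key, name text)"))
            conn.execute(sqlalchemy.text(
                "insert into users values (1, 'alice') "
                "on conflict (id) do update set name = excluded.name"))
        with engine.connect() as conn:
            name = conn.execute(sqlalchemy.text(
                "select name from users where id = 1")).scalar_one()
        assert name == "alice"
```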