
Comment by 8n4vidtmkvmk

5 months ago

It does suck when those large-scale integration tests fail, but sometimes that's the only real way to test something. E.g. I have to call a service owned by another team. It has a schema and documentation, so I can mock out what I think it will return, but how will I truly know the API is going to do what it says, or what I think it says, without actually calling the API?
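To make that concrete, here's roughly what that mock looks like in practice (a sketch using the `responses` library; the endpoint URL, payload shape, and `get_invoice_total()` are made up for illustration). The mock only encodes what I *think* the other team's API returns:

```python
import requests
import responses

# Hypothetical endpoint owned by another team.
INVOICE_URL = "https://billing.internal/api/invoices/42"

def get_invoice_total(invoice_id: int) -> float:
    """Code under test: reads a field from the other team's service."""
    resp = requests.get(f"https://billing.internal/api/invoices/{invoice_id}")
    resp.raise_for_status()
    return resp.json()["total"]

@responses.activate
def test_get_invoice_total_against_mock():
    # Stub the response exactly as the schema/docs describe it. If the real
    # API behaves differently, this test still passes, which is the problem.
    responses.add(responses.GET, INVOICE_URL, json={"total": 99.5}, status=200)
    assert get_invoice_total(42) == 99.5
```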

> how will I truly know the API is going to do what it says or what I think it says without actually calling the API?

What if the API changes all of a sudden in production? What about cases where the API stays the same but the content of the response is all wrong? How do tests protect you from that?

Edit: these are not hypothetical scenarios. Wrong responses are way more common than schema breakage, and upstream tooling is often pretty good at catching schema breakages.

Wrong responses also often cause way more havoc than schema breakages, because schema failures at least trigger an alert in the app anyway.
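To illustrate with a made-up invoice payload: a response can validate cleanly against the documented schema and still be wrong in exactly the way that causes havoc. Only a content-level assertion catches it:

```python
from jsonschema import validate

# Hypothetical schema for the invoice payload above.
INVOICE_SCHEMA = {
    "type": "object",
    "properties": {"total": {"type": "number"}},
    "required": ["total"],
}

bad_response = {"total": 0}  # every invoice suddenly free: wrong, but schema-valid

# Passes without raising: the schema check sees nothing amiss.
validate(instance=bad_response, schema=INVOICE_SCHEMA)

# A sanity check on the content is what actually fails here, as intended.
assert bad_response["total"] > 0, "schema-valid response with nonsense content"
```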

  • Tests can't catch everything; it's a question of cost/benefit: we stop when the diminishing returns from further tests (or other QA work) no longer justify the cost of further investment in them (including the opportunity cost of not spending that time improving QA elsewhere).

    For your example, the best place to invest would be in that API's own test suite (e.g. sending its devs examples of the usage we rely on); but of course we can't rely on others to make our lives easier. Contract tests can help with that, by making the API developers responsible for following some particular change-notification process (see the sketch after this comment).

    Still, such situations are hypothetical; whereas the sorts of integration tests the parent is describing are useful for keeping our deployments from immediately blowing up.
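A minimal consumer-side contract check along those lines, reusing the hypothetical invoice endpoint from the top comment: pin down the exact fields we rely on, then validate the provider's real response (in staging or a canary) against them.

```python
import requests
from jsonschema import validate

# The contract covers only the fields *we* actually read,
# not the provider's full schema. Values here are assumptions.
CONSUMER_CONTRACT = {
    "type": "object",
    "properties": {
        "total": {"type": "number", "exclusiveMinimum": 0},
        "currency": {"enum": ["USD", "EUR"]},
    },
    "required": ["total", "currency"],
}

def verify_provider_contract(base_url: str) -> None:
    """Run against a real environment, not a mock."""
    resp = requests.get(f"{base_url}/api/invoices/42", timeout=5)
    resp.raise_for_status()
    validate(instance=resp.json(), schema=CONSUMER_CONTRACT)
```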

  • That's exactly the point, isn't it? If the schema or response format changes, we want to catch that quickly. Canary, staging, or prod, take your pick, but we need to call the API and run some assertions against the response to make sure it's good.

Tbf that's what post-deployment verification tests are ideal for, rather than running them as integration/e2e tests that block your merges/deployments.

  • That's fine. If they're too slow, run them post-deployment. But do run them at some point so you can at least catch breakage quickly without waiting for user complaints.
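A sketch of that post-deployment check (the URL, record id, and exit-code alerting hook are all assumptions): call the real dependency right after deploying and page on failure, instead of gating merges on it.

```python
import sys
import requests

def post_deploy_check() -> None:
    resp = requests.get("https://billing.internal/api/invoices/42", timeout=5)
    resp.raise_for_status()  # catches outright breakage: 4xx/5xx, dead endpoint
    body = resp.json()
    # Content-level assertion: catches the "schema fine, response wrong" case.
    assert body["total"] > 0, f"suspicious content: {body}"

if __name__ == "__main__":
    try:
        post_deploy_check()
    except Exception as exc:
        print(f"post-deploy verification failed: {exc}", file=sys.stderr)
        sys.exit(1)  # wire this exit code into alerting, not the CI merge gate
```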