Comment by goku12
5 months ago
Not necessarily. For one, the local dev environment may be different or less pristine than what's encountered in CI. I sometimes use bubblewrap (the sandboxing engine behind Flatpak) to isolate the dev environment from the base system. Secondly, CI often does a lot more than what's possible on the local system. For example, it may run far more tests than is practical locally. Or the upstream repo may have code that you don't have in your local repo yet.
Besides all that, this is not at all what the author and your parent commenter are discussing. They are saying that the practice of triggering and running CI jobs entirely locally should be more common, rather than having to rely on a server. We do have CI runners that work locally, but CI job management is still done largely from servers.
> For example, it may run a lot more tests than what's practical in a local system.
Yes, this is what I was talking about. If there are lots of tests that are not practical to run locally, then they are bad tests, no matter how useful one might think they are. The only good tests are the ones that run fast. It is also a sign that the code itself is bad if you are forced to write tests that interact with the outside world.
For example, you can extract logic into a presentation layer and write unit tests for that, instead of mixing UI and business logic and writing browser tests for it. There are well-known patterns for this, like Model-View-Presenter.
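A minimal sketch of that idea, in Python: the presenter holds the display logic and can be unit-tested without any UI or browser. All the names here (`Order`, `OrderPresenter`, `format_total`) are made up for illustration, not taken from any framework.

```python
from dataclasses import dataclass


@dataclass
class Order:
    items: list[tuple[str, float]]  # (name, price) pairs


class OrderPresenter:
    """Pure logic: turns an Order into display-ready strings, no UI involved."""

    def total(self, order: Order) -> float:
        return sum(price for _, price in order.items)

    def format_total(self, order: Order) -> str:
        return f"Total: ${self.total(order):.2f}"


# A plain unit test -- fast, no browser, no DOM:
def test_format_total():
    order = Order(items=[("book", 12.50), ("pen", 1.25)])
    assert OrderPresenter().format_total(order) == "Total: $13.75"
```

The view layer then only has to render the strings the presenter hands it, so the slow browser tests shrink to a handful of smoke checks.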
I would rather put my effort into this than into figuring out how to run tests that launch databases, drive browsers, call APIs, start containers, etc. Everywhere I've seen these kinds of tests, they've contributed to the "it sucks to work on this code" feeling, and bad vibes are the worst thing that can happen to a codebase.
It does suck when those large-scale integration tests fail, but sometimes that's the only real way to test something. E.g. I have to call a service owned by another team. It has a schema and documentation, so I can mock out what I think it will return, but how will I truly know the API is going to do what it says (or what I think it says) without actually calling it?
> I truly know the API is going to do what it says or what I think it says without actually calling the API?
What if the API changes all of a sudden in production? What about cases where the API stays the same but the content of the response is all wrong? How do tests protect you from that?
Edit: these are not hypothetical scenarios. Wrong responses are far more common than schema breakages, and upstream tooling is often pretty good at catching schema breakages.
Wrong responses also often cause more havoc than schema breakages, because a schema failure at least triggers an alert in the app anyway.
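To make the point concrete, here is a toy illustration (the field names and the "price in cents" bug are invented): a response can pass a schema check while its content is semantically wrong, so a schema-based test gives no protection.

```python
def matches_schema(resp: dict) -> bool:
    """Shape/type check only -- the kind of validation a schema gives you."""
    return (
        isinstance(resp.get("price"), (int, float))
        and isinstance(resp.get("currency"), str)
    )


good = {"price": 19.99, "currency": "USD"}
# Hypothetical upstream bug: price accidentally returned in cents.
# Same fields, same types -- the schema check cannot tell the difference.
broken = {"price": 1999.0, "currency": "USD"}

assert matches_schema(good)
assert matches_schema(broken)  # passes, even though the value is wrong
```

Catching the second case takes runtime sanity checks or monitoring on the live responses, not a test against the schema.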
Tbf, that's what post-deployment verification tests are ideal for, instead of using them as integration/e2e tests that block your merges/deployments.
It is a tradeoff, e.g. running tests with a real database or other supporting service and taking longer vs. mocking things and having a test environment that is less like reality.
https://testcontainers.com/ is not quite the solution to all your problems, but it makes working with real databases and supporting services pretty much as easy as mocking them would be.
I'd really recommend against mocking dependencies for most tests though. Don't mock what you don't own, do make sure you test each abstraction layer appropriately.
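One common way to follow "don't mock what you don't own": wrap the third-party client in a thin adapter that you do own, then substitute the adapter in tests instead of mocking the vendor SDK. This is a sketch with invented names (`PaymentGateway`, `FakeGateway`, `checkout`), not any particular library's API.

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """The abstraction we own; tests double this, never the vendor SDK."""

    def charge(self, cents: int) -> bool: ...


class FakeGateway:
    """Hand-rolled test double for the adapter, recording calls."""

    def __init__(self) -> None:
        self.charged: list[int] = []

    def charge(self, cents: int) -> bool:
        self.charged.append(cents)
        return True


def checkout(gateway: PaymentGateway, cents: int) -> str:
    """Business logic depends only on the abstraction we own."""
    return "paid" if gateway.charge(cents) else "declined"


def test_checkout():
    gw = FakeGateway()
    assert checkout(gw, 500) == "paid"
    assert gw.charged == [500]
```

The real adapter that calls the vendor SDK then gets a small number of integration tests (or Testcontainers-style tests) of its own, while everything above it stays fast.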
Do you really need to test the Postgres API in your own code?