Comment by CuriouslyC
1 day ago
Pre-agents, 100% agree. Now, it's not a bad idea, the cost to do it isn't terrible, though there's diminishing returns as you get >90-95%.
LLMs don't make bad tests any less harmful. Nor do they write good tests for the things people mostly can't write good tests for.
Okay, but is aiming for 100% coverage really why the bad tests are bad?
Aiming for 100% coverage is almost certain to cause bad tests, yes.
But not all bad tests come from a goal of 100% coverage.
In most of the cases where I have seen bad tests, yes.
You just end up writing needless tests trying to trigger or mock an error state from a 3rd-party library that never actually returns an error; the lib just had a rule of "every call returns an error code" in case something changes and it's needed later.
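A minimal sketch of that kind of needless test, using a hypothetical `StorageClient` whose API returns an error code on every call by convention but never actually fails (all names here are made up for illustration):

```python
import unittest
from unittest import mock

class StorageClient:
    """Hypothetical 3rd-party client: returns an error code on every call
    by convention, even though in practice it always succeeds."""
    def put(self, key, value):
        return 0  # 0 == OK; nonzero exists only "in case something changes"

def save(client, key, value):
    err = client.put(key, value)
    if err != 0:
        # Unreachable with the real library today
        raise RuntimeError(f"put failed with code {err}")
    return True

class TestSave(unittest.TestCase):
    def test_save_ok(self):
        self.assertTrue(save(StorageClient(), "k", "v"))

    # The "needless" test: mocking an error the library never produces,
    # purely so the error branch shows up as covered in the report.
    def test_save_error_branch(self):
        client = mock.Mock()
        client.put.return_value = 7
        with self.assertRaises(RuntimeError):
            save(client, "k", "v")
```

The second test proves only that the mock was configured to return 7, not that the library can ever fail that way.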
The problem is that it is natural to have code that is unreachable. Maybe you are defending against potential cases that may exist in the future (e.g., things that are not yet implemented), or you have algorithms written in a general way but only used in a specific way. 100% test coverage requires removing these, and that can hurt future development.
It doesn't require removing them if you think you'll need them. It just requires writing tests for those edge cases so you have confidence that the code will work correctly if/when those branches do eventually run.
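A small sketch of that idea, with a hypothetical helper written more generally than its current callers need (the function and its callers are invented for illustration):

```python
def first_at_least(sorted_vals, threshold):
    """Return the first value >= threshold, or None if none exists.

    Written generally: it handles empty input and a threshold above every
    value, even though production callers today always pass non-empty data
    with an in-range threshold.
    """
    for v in sorted_vals:
        if v >= threshold:
            return v
    return None  # branch never taken by current callers

def test_first_at_least():
    # The path production actually exercises:
    assert first_at_least([1, 3, 5], 2) == 3
    # Edge cases no current caller hits, tested anyway so we have
    # confidence in the branches if/when a future caller reaches them:
    assert first_at_least([], 2) is None
    assert first_at_least([1], 99) is None
```

This keeps the general-purpose branches in place without leaving them as production code paths that have never been tried.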
I don't think anyone wants production code paths that have never been tried, right?