Comment by jaredcwhite

21 hours ago

100% test coverage, for most projects of modest size, is extremely bad advice.

Pre-agents, 100% agree. Now it's not a bad idea: the cost of doing it isn't terrible, though there are diminishing returns as you get above 90-95%.

  • You just end up writing needless tests that try to trigger or mock an error state from a 3rd-party library that never actually returns an error; the lib just had a rule of "every call returns an error code" in case something changes and it's ever needed.

  • The problem is that it is natural to have code that is unreachable. Maybe you are defending against cases that may exist in the future (e.g., things that are not yet implemented), or you have algorithms written in a general way but only used in a specific way. 100% test coverage requires removing these, and that can hurt future development.

    • It doesn't require removing them if you think you'll need them. It just requires writing tests for those edge cases so you have confidence that the code will work correctly if/when those branches do eventually run (a minimal sketch of that is below, after this thread).

      I don't think anyone wants production code paths that have never been tried, right?
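
A minimal sketch of that idea in C (every name here is made up for illustration): the default branch is effectively unreachable today, but a test can still drive it by passing the sentinel enumerator, so the branch shows up in coverage and its behavior is pinned down before anyone depends on it.

```c
/* Minimal sketch (hypothetical names): keep the defensive branch, but drive it
 * from a test so it is covered and its behavior is pinned down. */
#include <assert.h>

typedef enum { FMT_JSON, FMT_XML, FMT_COUNT } format_t;

/* FMT_COUNT is a sentinel no caller passes today, so the default branch is
 * "unreachable" in production, which is exactly the kind of code being debated. */
static const char *extension_for(format_t fmt) {
    switch (fmt) {
    case FMT_JSON: return ".json";
    case FMT_XML:  return ".xml";
    default:       return ".bin";   /* defensive fallback */
    }
}

int main(void) {
    /* The branches callers actually use today. */
    assert(extension_for(FMT_JSON)[1] == 'j');
    assert(extension_for(FMT_XML)[1] == 'x');

    /* Exercise the defensive branch on purpose. FMT_COUNT is a real
     * enumerator, so this is well-defined C, no UB games needed. */
    assert(extension_for(FMT_COUNT)[1] == 'b');
    return 0;
}
```

Whether pinning down the fallback is worth a test is the judgment call being argued above; the point is only that keeping a defensive branch and covering it aren't mutually exclusive.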

laziness? unprofessionalism? both? or something else?

  • You forgot "difficult". How do you test a system call failure? How do you test a system call failure when the first N calls need to pass? Be careful how you answer; some answers technically fall into the "undefined behavior" category (if you are using C or C++). One seam-based answer is sketched below.
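
One common answer, sketched below in C and deliberately kept boring so it stays well-defined: route the syscall through a seam (a plain function pointer here; link-time wrapping such as GNU ld's --wrap is another route) and substitute a test double that succeeds for the first N calls and then fails. Every name in the sketch (write_fn, flush_all, flaky_write) is hypothetical.

```c
/* Minimal sketch (hypothetical names): inject a write(2) failure after the
 * first N calls by routing the syscall through a seam, with no UB involved. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Production code calls the seam instead of calling write() directly. */
static ssize_t (*write_fn)(int, const void *, size_t) = write;

/* Code under test: keep writing until everything is flushed or a call fails. */
static int flush_all(int fd, const char *buf, size_t len) {
    while (len > 0) {
        ssize_t n = write_fn(fd, buf, len);
        if (n < 0)
            return -1;              /* the error path we want covered */
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Test double: pretend to succeed (with short writes) for N calls, then fail. */
static int successes_left;
static ssize_t flaky_write(int fd, const void *buf, size_t len) {
    (void)fd; (void)buf;
    if (successes_left-- > 0)
        return (ssize_t)(len < 4 ? len : 4);   /* short write forces a retry */
    errno = EIO;
    return -1;
}

int main(void) {
    const char *msg = "hello, coverage\n";

    /* Happy path against the real syscall. */
    if (flush_all(STDOUT_FILENO, msg, strlen(msg)) != 0)
        return 1;

    /* Failure path: the first 2 calls "succeed", the 3rd reports EIO. */
    write_fn = flaky_write;
    successes_left = 2;
    if (flush_all(STDOUT_FILENO, msg, strlen(msg)) != -1)
        return 1;

    fprintf(stderr, "error path covered\n");
    return 0;
}
```

The counter in the double is what handles "the first N calls need to pass"; the seam is what lets the test reach the error path at all, without resorting to tricks that drift toward undefined behavior.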