Comment by mullr
17 years ago
That's not to say that static analysis didn't find any problems at all; just that the other, more extensive methods found them first, which speaks well for their tests. If anything, I would say this speaks more in favor of static analysis, since they clearly put an enormous amount of effort into developing those tests, and most commercial software can't justify anything close to that. (Kind of ironic, really.)
Hypothetically, what would it take for you to believe that static analysis doesn't work? Because, as lame as C's typechecking is, this remark sounds to me a lot like the "no true Scotsman" fallacy.
The sort of testing that the SQLite developers do is, while depressingly uncommon, not that unreasonable for any sort of software that has been around for a few years. If your bug-fixing methodology is "write a test to reproduce the bug, then fix it; repeat", you end up with a pile of test cases as a result.
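To make that loop concrete, here is a minimal sketch of what a "reproduce, fix, keep the test" case might look like against SQLite's public C API. This is not taken from SQLite's actual suite (which, as I understand it, is mostly written in Tcl plus the separate TH3 harness); the table, query, and the "wrong row count" bug are made up purely for illustration.

    /* Hypothetical regression test: reproduce a reported bug, keep it forever. */
    #include <assert.h>
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db = NULL;
        sqlite3_stmt *stmt = NULL;
        int rc;

        /* Use an in-memory database so the test is self-contained. */
        rc = sqlite3_open(":memory:", &db);
        assert(rc == SQLITE_OK);

        /* Set up the state described in the (imaginary) bug report. */
        rc = sqlite3_exec(db,
            "CREATE TABLE t(x); INSERT INTO t VALUES(1);",
            NULL, NULL, NULL);
        assert(rc == SQLITE_OK);

        /* The report claims this query returned the wrong count.
         * The assertion fails until the bug is fixed, then guards
         * against regressions afterwards. */
        rc = sqlite3_prepare_v2(db, "SELECT count(*) FROM t", -1, &stmt, NULL);
        assert(rc == SQLITE_OK);
        rc = sqlite3_step(stmt);
        assert(rc == SQLITE_ROW);
        assert(sqlite3_column_int(stmt, 0) == 1);

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        printf("regression test passed\n");
        return 0;
    }

Do that for every bug report over a few years and the "pile of test cases" accumulates on its own.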
This article tells me two things: (1) static analysis is not complete, and (2) static analysis is not useless. (1) should be obvious, while (2) is implied but not stated. If you already care a great deal about testing, then it's perfectly reasonable to dispense with static analysis altogether and just write the tests.
This is where the commercial-software point comes in: while that's a great regression-test policy, I don't believe it's at all common. There are places where testing isn't really done. Probably most places. If you have no tests, or bad tests, then static analysis is better than nothing.
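For the "better than nothing" case, a contrived example of the kind of defect a typical analyzer (clang's static analyzer, Coverity, etc.) will flag without a single test existing; the function and its bug are hypothetical, not from any real codebase:

    #include <stdlib.h>
    #include <string.h>

    char *copy_name(const char *src) {
        char *dst = malloc(strlen(src) + 1);
        /* Analyzer warning: dst may be NULL if malloc failed,
         * so this strcpy is a potential NULL dereference. */
        strcpy(dst, src);
        return dst;
    }

A test suite would only catch that if someone thought to simulate allocation failure; the analyzer flags it for free.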