Comment by d0mine
17 years ago
A data point for the discussion "Static vs. dynamic typing: on using the compiler to find bugs":
"Static analysis has not proven to be helpful in finding bugs. We cannot call to mind a single problem in SQLite that was detected by static analysis that was not first seen by one of the other testing methods described above. On the other hand, we have on occasion introduced new bugs in our efforts to get SQLite to compile without warnings."
It's worth noting that C can barely be considered statically typed. If SQLite were written in, say, Haskell, I doubt it would need millions of lines of test code just to make sure it frees everything it mallocs.
I'm pretty ignorant about functional languages, but how does a language like Haskell handle allocation failures? I'm curious how well it would work in the embedded space.
I assume it doesn't handle it as well as custom hand-tuned code.
I've also never used SQLite on an embedded system.
2 replies →
Most of the 'static analysis' already takes place during coding, when your (incremental) compiler screams DOES NOT COMPILE. I doubt the developers of SQLite haven't seen that sort of message quite often (Eclipse certainly shows me the red squiggly line often enough).
That is really only a very small subset of static analysis: GCC compiler warnings. I am sure the developers appreciate it when their program fails to compile because the compiler statically notices a bug that would cause a runtime error in a more dynamic language. They might also be able to productively use more sophisticated static analysis tools.
That's not to say that static analysis didn't find any problems at all. Just that the other, more extensive, methods found them first. Which speaks well for their tests. If anything, I would say that this speaks more for good static analysis, since they clearly put an enormous amount of effort into developing these tests. Most commercial software can't justify anything close to that. (kind of ironic, really)
Hypothetically, what would it take for you to believe that static analysis doesn't work? Because, as lame as C's typechecking is, this remark sounds to me a lot like the "no true Scotsman" fallacy.
The sort of testing that the SQLite developers do is, while depressingly uncommon, not that unreasonable for any sort of software that has been around for a few years. If your bug-fixing methodology is "write a test to reproduce the bug, then fix it; repeat", you end up with a pile of test cases as a result.
This article tells me two things: (1) static analysis is not complete, and (2) static analysis is not useless. (1) should be obvious, while (2) is implied but not stated. If you care about testing a lot, then it's perfectly reasonable to dispense with static analysis altogether and just write the tests.
This is where it goes into the commercial software bit: while that's a great regression test policy, I don't believe it's at all common. There are places where testing isn't really done. Probably most places. If you have no tests or bad tests, then static analysis is better than nothing.
Wow, astounding ignorance of (and ingratitude toward) static analysis. Coverity (a static analysis tool) found many defects in the code (17 of them fixed). SQLite's defects-per-KLOC figure is not that great compared with other projects; KDE actually does much better: http://scan.coverity.com/rungAll.html
Engler (whose students founded Coverity) et al. had an OSDI "best paper" last year on using static analysis (with constraint solvers, etc.) to automatically generate test cases, which actually beats the hand-written test cases for glibc, despite their years of development.
http://www.stanford.edu/~engler/klee-osdi-2008.pdf