Comment by somat
12 hours ago
Whenever I read an article about formal verification systems there is always that nagging thought in the back of my head: why can you trust your formal verification system to be bug-free when you can't trust the program? Should not the chance of bugs be about equal in both of them?
You have a program that does something and you write another program to prove it. What assurance do you have that one program has fewer bugs than the other? Why can one program have bugs but the other can't? How do you prove that you are proving the right thing? It all sort of ties into Heisenberg's uncertainty theorem. A system cannot be fully described from within that system.
Don't get me wrong, I think these are great systems doing great work. But I always feel there is something missing in the narrative.
I think a more practical view is that a program is already a sort of proof: there is something to be solved, and the program provides a mechanism to prove it. But this proof may be, and probably is, incorrect; as bugs are fixed it gets more and more correct. A powerful but time-consuming tool to try to force correctness is to build the machine twice using different mechanisms. Then mismatched output indicates something is wrong with one of them, and your job as an engineer is to figure out which one. This is what formal verification brings to the table: the second mechanism.
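As a toy sketch of the build-it-twice idea (hypothetical code; the integer-square-root example is just for illustration): two independently written implementations are cross-checked on many inputs, and any disagreement flags a bug in at least one of them.

    // Two independent implementations of integer square root,
    // cross-checked against each other: any mismatch means at
    // least one of them is wrong.

    // Mechanism 1: binary search for the largest m with m*m <= n.
    fn isqrt_binary(n: u64) -> u64 {
        let (mut lo, mut hi) = (0u64, n.min(u32::MAX as u64) + 1);
        while lo + 1 < hi {
            let mid = lo + (hi - lo) / 2;
            if mid * mid <= n { lo = mid } else { hi = mid }
        }
        lo
    }

    // Mechanism 2: Newton's method on integers (fine for the tested range).
    fn isqrt_newton(n: u64) -> u64 {
        if n < 2 { return n; }
        let mut x = n;
        let mut y = (x + 1) / 2;
        while y < x {
            x = y;
            y = (x + n / x) / 2;
        }
        x
    }

    fn main() {
        for n in 0..1_000_000u64 {
            assert_eq!(isqrt_binary(n), isqrt_newton(n), "mismatch at n = {n}");
        }
        println!("the two mechanisms agree on all tested inputs");
    }

A formally verified spec plays the same role as the second implementation here, except it never has to be executable at all.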
> It all sort of ties into Heisenberg's uncertainty theorem. A system cannot be fully described from within that system.
Surely you are talking about Gödel incompleteness, not Heisenberg's uncertainty principle; in which case they're actually not the same system: the verification/proof language is more like a metalanguage taking the implementation language as its object.
(Gödel's observation for mathematics was just that for formal number systems of sufficient power, you can embed that metalanguage into the formal number system itself.)
> Whenever I read an article about formal verification systems there is always that nagging thought in the back of my head: why can you trust your formal verification system to be bug-free when you can't trust the program? Should not the chance of bugs be about equal in both of them?
A bug in the formal verification tool could potentially be noticed by any user of that formal verification tool. (And indirectly by any of their users noticing a bug about which they say, "huh, I thought the tool told me that was impossible.")
A bug in your program can only be potentially noticed by you and your users.
There are also entire categories of bugs that may not be relevant. For instance, if I'm trying to prove correctness of a distributed concurrent system and I use a model+verifier that checks things in a sequential, non-concurrent way, then I don't have to worry about the prover having all the same sorts of race conditions as my actual code.
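As a minimal sketch of that point (hypothetical code, not any particular tool): a single-threaded checker can enumerate every interleaving of a two-thread counter increment and find the classic lost-update race, without itself ever running concurrently.

    // A toy sequential "model checker": it explores all interleavings
    // of two non-atomic increments (read, then write read+1) on a
    // shared counter. It is single-threaded, so it cannot itself race.

    #[derive(Clone)]
    struct Thread {
        reg: Option<u64>, // None = not yet read; Some(v) = read v, will write v+1
        done: bool,
    }

    fn explore(counter: u64, threads: Vec<Thread>, finals: &mut Vec<u64>) {
        let mut progressed = false;
        for i in 0..threads.len() {
            if threads[i].done { continue; }
            progressed = true;
            let mut next = threads.clone();
            let mut c = counter;
            match threads[i].reg {
                None => next[i].reg = Some(counter), // step 1: read the counter
                Some(v) => { c = v + 1; next[i].done = true; } // step 2: write
            }
            explore(c, next, finals);
        }
        if !progressed {
            finals.push(counter); // all threads finished: record final value
        }
    }

    fn main() {
        let t = Thread { reg: None, done: false };
        let mut finals = Vec::new();
        explore(0, vec![t.clone(), t], &mut finals);
        finals.sort();
        finals.dedup();
        // Prints [1, 2]: the interleaving where both threads read 0
        // before either writes loses an update and ends at 1.
        println!("{:?}", finals);
    }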
But yeah, if you try to write your own prover to prove your own software, you could screw up either. But that's not what is being discussed here.
Formal verification is just an extra step up from the static types that you might have in a language such as Rust.
Common static types prove many of the important properties of a program. If I declare a variable of type String then the type checker ensures that it is indeed a String. That's a proof. Formal verification takes this further and proves other properties, such as that the string is never empty.
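As a rough sketch of that spectrum in Rust (the `NonEmptyString` type here is hypothetical, a common newtype pattern rather than anything from the standard library), you can push the "never empty" property into the type system by making the check the only way to construct the value:

    // The constructor is the only way to obtain a NonEmptyString, so
    // every function that receives one holds a machine-checked
    // guarantee that the string is non-empty.
    struct NonEmptyString(String);

    impl NonEmptyString {
        // The emptiness check happens exactly once, here.
        fn new(s: String) -> Option<NonEmptyString> {
            if s.is_empty() { None } else { Some(NonEmptyString(s)) }
        }

        // Safe to unwrap: the type's invariant guarantees a first char.
        fn first_char(&self) -> char {
            self.0.chars().next().unwrap()
        }
    }

    fn main() {
        let name = NonEmptyString::new("rust".to_string()).expect("non-empty");
        println!("{}", name.first_char()); // prints 'r'
        assert!(NonEmptyString::new(String::new()).is_none());
    }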
Common static types are very effective. Many users of Rust or Haskell will claim that if a program compiles then it usually works correctly when they go to run it.
However, there is a non-linear relationship between the probability of program correctness and the amount of type machinery required to achieve it. Being almost certain requires vastly more types than just being confident.
That's the real issue with formal verification: being 75% sure with less code is better than being 99% sure in most situations, though if I were programming a radiotherapy machine I might think differently.
I think formal verification brings a bit more to the table. The logical properties are not just a second implementation; they can be radically simpler. I think quantifiers (forall/exists) are doing a lot of work here: they are not usable directly in regular code. For example, you can specify that a shortest path algorithm must satisfy:
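Roughly, for a returned path P from s to t (a sketch; isPath and len are illustrative names):

    isPath(P, s, t)  ∧  ∀ Q, isPath(Q, s, t) → len(P) ≤ len(Q)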
That's much simpler than any actual shortest path algorithm.
AWS has said that formal verification enables their engineers to implement aggressive performance optimizations on complex algorithms without the fear of introducing subtle bugs or breaking system correctness. It helped double the performance of the IAM ACL evaluation code.
The chances of significant bugs in Lean which lead to false answers to real problems are extremely small (this bug just caused a crash, but that is still bad). Many, many people try very hard to break Lean, and think about how its proofs work, and fail. Is it foolproof? No. It might have flaws; it might be that logic itself is inconsistent.
I often think of the 'news level' of a bug. A bug in most code wouldn't be news. A bug which caused Lean to claim a real proof someone cared about was true, when it wasn't, would be the biggest news in the proof community in a decade.
> Should not the chance of bugs be about equal in both of them?
Even if it is, the verification is still very useful. The verifier only runs for a few minutes and probably doesn't hit many of its own edge cases. The chance it actually triggers one of its own bugs is low, and the chance such a bug makes it wrongly accept your program is a lot lower, especially if it has to output a proof at the end. Meanwhile, it's scrutinizing every single edge case your program has.
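A sketch of the proof-at-the-end idea (hypothetical code, not a real tool): the solver can be arbitrarily complicated and buggy, because its answer comes with a certificate that a tiny independent checker validates.

    // SAT-style certificate checking: a clause is a list of literals,
    // where literal k > 0 means variable k is true and k < 0 means
    // variable k is false. The formula is satisfied if every clause
    // contains at least one literal made true by the assignment.
    type Clause = Vec<i32>;

    fn check_certificate(clauses: &[Clause], assignment: &[bool]) -> bool {
        clauses.iter().all(|clause| {
            clause.iter().any(|&lit| {
                let var = (lit.unsigned_abs() - 1) as usize;
                (lit > 0) == assignment[var]
            })
        })
    }

    fn main() {
        // (x1 ∨ x2) ∧ (¬x1 ∨ x2)
        let clauses = vec![vec![1, 2], vec![-1, 2]];
        // Suppose some large, complicated solver claims x1=false, x2=true.
        let claimed = vec![false, true];
        // This handful of lines is all we actually have to trust.
        assert!(check_certificate(&clauses, &claimed));
        println!("certificate checks out");
    }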
> Should not the chance of bugs be about equal in both of them?
Why?
Are you saying that all programs ever written have the exact same chance of bugs? That a hello world is as buggy as a vibe-coded Chromium clone?
If you accept the premise that different programs have different chances to have bugs, then I'd say:
1. Simpler programs are likely less buggy.
2. Programs used by more people are likely less buggy.
3. Programs maintained by experts who care about correctness are likely less buggy.
4. Programs where the stakes are higher are likely less buggy.
All things considered, I think it's fair to say Lean is likely less buggy than a random program written by me over a weekend.
> Heisenberg's uncertainty theorem
It has nothing to do with the uncertainty principle. If you think otherwise, it means your understanding of the uncertainty principle comes from sci-fi :)
A program is a hard proof of existence.
It runs (maybe crashes), therefore … it exists.
The tension between spec bugs vs. implementation bugs is real. But I will take a bug in a situation where the implementation has been verified any day.
Working from what we really want is problem solving in the problem domain, as opposed to going into the never-ending not-what-we-were-trying-to-solve implementation weeds.