Comment by amavect
1 day ago
In constructive logic, a proof of "A or B" consists of a pair (T,P). If T equals 0, then P proves A. If T equals 1, then P proves B. This directly corresponds to tagged union data types in programming. A "Float or Int" consists of a pair (Tag, Union). If Tag equals 0, then Union stores a Float. If Tag equals 1, then Union stores an Int.
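The (Tag, Union) pair described above is exactly what a sum type gives you in a typed language. A minimal sketch in Rust (the `Or` type and `describe` function are illustrative names, not from any library):

```rust
// A proof of "A or B" as a tagged value: Left(a) carries evidence for A,
// Right(b) carries evidence for B. The enum discriminant is the Tag;
// the payload is the Union.
enum Or<A, B> {
    Left(A),  // tag 0: stores an A
    Right(B), // tag 1: stores a B
}

// The "Float or Int" example: inspecting the tag tells us which
// variant the union currently stores.
fn describe(v: &Or<f64, i64>) -> &'static str {
    match v {
        Or::Left(_) => "holds a Float",
        Or::Right(_) => "holds an Int",
    }
}

fn main() {
    let x: Or<f64, i64> = Or::Left(3.14);
    let y: Or<f64, i64> = Or::Right(42);
    println!("{}", describe(&x)); // holds a Float
    println!("{}", describe(&y)); // holds an Int
}
```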
In classical logic, a proof of "A or not A" requires no evidence at all: the excluded middle gives it to you out of thin air, with no tag and no payload.
Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
> Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
I don't know who "we" are, but most proofs of algorithm correctness use classical logic.
Also, there's nothing "obvious" about what you said unless you want proof objects, and why you'd want that is far from obvious in itself.
Well, to translate my words to your liking: "In my opinion, everyone already uses a sort of constructive logic for programming."
I challenge you on "most proofs of algorithm correctness use classical logic". That would mean using double negation elimination or the excluded middle. I bet most proofs don't use those. Give examples.
Oh, if you mean that most algorithm correctness proofs are finitary and therefore don't need to explicitly rely on the excluded middle, that may well be the case, but they certainly don't try to avoid it either. Look at any algorithm paper with a proof of correctness and see how many of them explicitly limit themselves to constructive logic. My point isn't that most algorithm/program proofs need the excluded middle, it's that they don't benefit from not having it, either.
Proofs of safety are proving a negative: they're all about what an algorithm won't do. So constructivism is irrelevant to those, because the algorithm has provided all the constructive content already! Proofs of liveness/termination are the interesting case.
You might also add designing an algorithm to begin with, or porting it from a less restrictive to a more restrictive model of computation, as kinds of proofs in CS that are closely aligned to what we'd call constructive.
The difference only becomes evident when proving liveness/termination: if your algorithm terminates successfully, it has to construct something, and all that remains is to prove that what it constructs is not incorrect. And then it turns out that these proofs do use something quite aligned with constructive logic.
... and also to classical logic. Liveness proofs typically require finding a variant that converges to some terminal value, and that's just as easy to do in classical logic as in constructive logic.
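As a concrete example of such a variant, Euclid's gcd terminates because its second argument is a non-negative integer that strictly decreases on every iteration. A minimal sketch (the explicit variant check is illustrative annotation, not part of the algorithm; the argument itself works equally well classically or constructively):

```rust
// Euclid's gcd. The termination variant is `b`: a non-negative integer
// that strictly decreases each iteration, so the loop must terminate.
fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let old_variant = b; // record the variant before the step
        let r = a % b;       // r < b always holds for b != 0
        a = b;
        b = r;
        debug_assert!(b < old_variant); // the variant strictly decreased
    }
    a
}

fn main() {
    println!("{}", gcd(48, 18)); // 6
}
```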
I've been using formal methods for years now and have yet to see where constructive logic makes things easier (I'm not saying it necessarily makes things harder, either).
You aren't giving any justification for why proofs should necessarily map to data structures.
Not necessarily; I only argue for utility. You can find a better justification in the Curry-Howard correspondence.
How have you used the Curry-Howard correspondence to make proving the correctness of non-trivial algorithms easier (than, say, Isabelle/HOL or TLA+ proofs)?