By "asserting X" do you mean "checking whether X is true and crashing the program if not", like the assert macro in C or the assert statement in Python? No, that is almost never done, for three reasons:
• Crashing the program is often what you formally verified the program to prevent in the first place! A crashing program is what destroyed Ariane 5 on its maiden flight, for example. Crashing the program is often the worst possible outcome rather than an acceptable one.
• Many of the assumptions are not things that a program can check are true. Examples from the post include "nothing is concurrently modifying [variables]", "the compiler worked correctly, the hardware isn't faulty, and the OS doesn't mess with things," and, "unsafe [Rust] code does not have [a memory bug] either." None of these assumptions could be reliably verified by any conceivable test a program could make.
• Even when the program could check an assumption, it often isn't computationally feasible; for example, binary search of an array is only valid if the array is sorted, but checking that every time the binary search routine is invoked would take it from logarithmic time to linear time, typically an orders-of-magnitude slowdown that would defeat the purpose of using a binary search instead of a simpler sequential search. (I think Hillel tried to use this example in the article but accidentally wrote "binary sort" instead, which isn't a thing.)
When crashing the program is acceptable and correctness preconditions can be efficiently checked, postconditions usually can be too. In those cases, it's common to use either runtime checks or property-based testing instead of formal verification, which is harder.
This becomes an interesting conversation then. First of all, it could mean "checking whether X is true and logging an error" instead of exiting the program.
- But if you aren't comfortable crashing the program if the assumptions are violated, then what is your formal verification worth? Not much, because the formal verification only holds if the assumptions hold, and you are indicating you don't believe they will hold.
- True, some are infeasible to check. In that case, you could then check them weakly or indirectly. For example, check if the first two indices of the input array are not sorted. You could also check them infrequently. Better to partially check your assumptions than not check at all.
> "checking whether X is true and logging an error"
You now have some process or computation which is in an unknown state. Presumably it was important! There are lots of cases where runtime errors are fine and expected -- when processing input from the outside world, but if you started a chain of computation with good input and somehow ended up with bad input, then you have a bug! That is bad! Everything after that point can no longer be trusted, and there was some place where the code went wrong before that and made everything between there and where you caught the error invalid. And that has possibly poisoned your whole system.
That’s impractical. Take binary search and the assumption the list is sorted. Verifying the list is sorted would negate the point of binary search as you would be inspecting every item in the list.
There is a useful middle ground here. When picking the middle element, verify that it is indeed within the established bounds. This way, you'll still catch the sort order violations that matter without making the whole search linear.
ASSERTING the list is sorted as an assumption is significantly different form VERIFYING that the list is sorted before executing the search. Moreover, type systems can track that a list was previously sorted and maintained it's sorted status making the assumption reasonable to state.
What do you mean when you say "assert" and "verify"? In my head, given the context of this thread and the comment you're replying to, they can both only mean "add an `if not sorted then abort()`."
> Moreover, type systems can track that a list was previously sorted and maintained it's sorted status making the assumption reasonable to state.
This is true, but if you care about correct execution, you would need to re-verify that the list is sorted - bitflips in your DRAM or a buggy piece of code trampling random memory could have de-sorted the list. Then your formally verified application misbehaves even though nothing is wrong with it.
It's also possible to end up with a "sorted" list that isn't actually sorted if your comparison function is buggy, though hopefully you formally verified the comparison function and it's correct.
Only if you verify it for every search. If you haven't touched the list since the last search, the verification is still good. For some (not all) situations, you can verify the list at the start of the program, and never have to verify it again.
By "asserting X" do you mean "checking whether X is true and crashing the program if not", like the assert macro in C or the assert statement in Python? No, that is almost never done, for three reasons:
• Crashing the program is often what you formally verified the program to prevent in the first place! A crashing program is what destroyed Ariane 5 on its maiden flight, for example. Crashing the program is often the worst possible outcome rather than an acceptable one.
• Many of the assumptions are not things that a program can check are true. Examples from the post include "nothing is concurrently modifying [variables]", "the compiler worked correctly, the hardware isn't faulty, and the OS doesn't mess with things," and, "unsafe [Rust] code does not have [a memory bug] either." None of these assumptions could be reliably verified by any conceivable test a program could make.
• Even when the program could check an assumption, it often isn't computationally feasible; for example, binary search of an array is only valid if the array is sorted, but checking that every time the binary search routine is invoked would take it from logarithmic time to linear time, typically an orders-of-magnitude slowdown that would defeat the purpose of using a binary search instead of a simpler sequential search. (I think Hillel tried to use this example in the article but accidentally wrote "binary sort" instead, which isn't a thing.)
When crashing the program is acceptable and correctness preconditions can be efficiently checked, postconditions usually can be too. In those cases, it's common to use either runtime checks or property-based testing instead of formal verification, which is harder.
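For example, here is roughly what property-based testing of a postcondition looks like in Python with the Hypothesis library (a sketch, not from the article), checking that sorting actually sorts:

    from collections import Counter
    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_sort_postconditions(xs):
        ys = sorted(xs)
        # Postconditions: output is in order and is a permutation of the input.
        assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
        assert Counter(ys) == Counter(xs)

    test_sort_postconditions()   # Hypothesis generates and checks many random inputs

The check runs at test time rather than in production, so it sidesteps both the crash-in-production objection and the cost objection, at the price of only sampling the input space.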
This becomes an interesting conversation then. First of all, "asserting X" could instead mean "checking whether X is true and logging an error" rather than exiting the program.
- But if you aren't comfortable crashing the program when the assumptions are violated, then what is your formal verification worth? Not much, because the formal verification only holds if the assumptions hold, and you are indicating you don't believe they will.
- True, some are infeasible to check. In that case, you can check them weakly or indirectly; for example, check whether the first two elements of the input array are out of order (a sketch of such a partial check follows below). You could also check them infrequently. Better to partially check your assumptions than not to check them at all.
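A minimal sketch of that kind of weak, cheap check in Python, with a made-up helper name and an arbitrary sampling scheme: it looks at the first adjacent pair plus a few randomly chosen adjacent pairs, so it costs a handful of comparisons instead of O(n), and it can miss violations but catches many real mistakes.

    import random

    def probably_sorted(xs, samples=8):
        # Cheap, partial sortedness check: a few comparisons instead of O(n).
        if len(xs) < 2:
            return True
        if xs[0] > xs[1]:                       # the "first two elements" check
            return False
        for _ in range(samples):                # plus a few random adjacent pairs
            i = random.randrange(len(xs) - 1)
            if xs[i] > xs[i + 1]:
                return False
        return True

    print(probably_sorted(list(range(100))))            # True
    print(probably_sorted([2, 1] + list(range(100))))   # False

A passing result proves nothing on its own, but catching a violation occasionally is far better than silently returning wrong answers forever.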
You don't assume the assertions; the verification shows they always hold!
> "checking whether X is true and logging an error"
You now have some process or computation which is in an unknown state. Presumably it was important! There are lots of cases where runtime errors are fine and expected, such as when processing input from the outside world. But if you started a chain of computation with good input and somehow ended up with bad input, then you have a bug! That is bad! Everything after that point can no longer be trusted: the code went wrong somewhere earlier, invalidating everything between that point and where you caught the error. And that has possibly poisoned your whole system.
You didn't answer my question!
That's impractical. Take binary search and the assumption that the list is sorted. Verifying that the list is sorted would negate the point of binary search, since you would be inspecting every item in the list.
There is a useful middle ground here. When picking the middle element, verify that it is indeed within the established bounds. This way, you'll still catch the sort order violations that matter without making the whole search linear.
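Here is a minimal sketch of that middle ground in Python (function name and details invented for illustration). Each probe is compared against the values established at the current bounds, so any unsorted region the search actually touches trips an assertion, while the search stays logarithmic:

    def checked_binary_search(xs, target):
        lo, hi = 0, len(xs)
        lo_val = hi_val = None          # values known to bracket the remaining range
        while lo < hi:
            mid = (lo + hi) // 2
            v = xs[mid]
            # In a truly sorted list, v must lie between the bracketing values.
            assert lo_val is None or lo_val <= v, "list is not sorted"
            assert hi_val is None or v <= hi_val, "list is not sorted"
            if v < target:
                lo, lo_val = mid + 1, v
            elif v > target:
                hi, hi_val = mid, v
            else:
                return mid
        return -1

    xs = [1, 4, 7, 9]
    print(checked_binary_search(xs, 7), checked_binary_search(xs, 3))   # 2 -1

The assertions reuse values the search already fetched, so the extra cost is a couple of comparisons per probe, not a scan of the list.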
ASSERTING the list is sorted as an assumption is significantly different from VERIFYING that the list is sorted before executing the search. Moreover, type systems can track that a list was previously sorted and has maintained its sorted status, making the assumption reasonable to state.
What do you mean when you say "assert" and "verify"? In my head, given the context of this thread and the comment you're replying to, they can both only mean "add an `if not sorted then abort()`."
But you make some sort of distinction here.
> Moreover, type systems can track that a list was previously sorted and has maintained its sorted status, making the assumption reasonable to state.
This is true, but if you care about correct execution, you would need to re-verify that the list is sorted: bitflips in your DRAM or a buggy piece of code trampling random memory could have de-sorted it. Then your formally verified application misbehaves even though nothing is wrong with the code itself.
It's also possible to end up with a "sorted" list that isn't actually sorted if your comparison function is buggy, though hopefully you formally verified the comparison function and it's correct.
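A concrete Python illustration of the buggy-comparison case: NaN breaks the total order that sorting relies on, so sorted() can hand back a "sorted" list that is not actually in order.

    # Every comparison with NaN is False, violating the ordering the sort assumes.
    xs = sorted([3.0, float("nan"), 1.0])
    print(xs)                                                    # e.g. [3.0, nan, 1.0] in CPython
    print(all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)))   # False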
Only if you verify it for every search. If you haven't touched the list since the last search, the verification is still good. For some (not all) situations, you can verify the list at the start of the program, and never have to verify it again.
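One way to encode that in Python is to pay for the full check exactly once and then hand out an object that can't be re-ordered afterwards (class name and API invented for illustration; it still assumes memory isn't corrupted underneath you, per the sibling comment about bitflips):

    from bisect import bisect_left

    class SortedView:
        def __init__(self, items):
            xs = tuple(items)            # immutable, so it stays sorted once checked
            assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), \
                "SortedView requires sorted input"
            self._xs = xs

        def contains(self, target):      # every later search skips the O(n) check
            i = bisect_left(self._xs, target)
            return i < len(self._xs) and self._xs[i] == target

    view = SortedView([1, 4, 7, 9])
    print(view.contains(7), view.contains(3))   # True False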
What would that look like if you accidentally assumed you have arbitrarily large integers but in practice you have 64 bits?

    Add(x, y):
        Assert(x >= 0 && y >= 0)
        z = x + y
        Assert(z >= x && z >= y)    // detects wraparound (assuming wrapping, not undefined, overflow)
        return z

There are definitely smarter ways to do this, but in practice there is always some way to encode the properties you care about so that a violation trips your assertions. If you can't observe a violation, it's not a violation: https://en.wikipedia.org/wiki/Identity_of_indiscernibles
In some languages, overflow is treated as something that can't happen (undefined behavior), so the optimizer will remove checks like these.