IIRC this book unfortunately only proves correctness directly and not runtime. Its runtime proofs are based off an abstraction of the algorithm suitable for direct manipulation by proofs rather than the actual implementation in code.
Does anybody know of any languages that let you prove properties about the runtime of a function directly implemented in the language?
Eiffel is the language you are looking for. Its pre-conditions and post-conditions prove the properties of the code that you are actually executing.
The book that does the equivalent but with the properties checked at runtime is: Object Structures (like data structures but with objects).
Here is the link:
https://openlibrary.org/works/OL2979696W/Object_structures?e...
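(Not Eiffel syntax, but the same design-by-contract idea can be sketched with plain runtime assertions. A hypothetical Python approximation follows; Eiffel would express the two assertions as require/ensure clauses on the routine, checked against the executing code itself.)

```python
def binary_search(data: list[int], target: int) -> int:
    # precondition ("require" in Eiffel): the input must already be sorted
    assert all(data[i] <= data[i + 1] for i in range(len(data) - 1))

    lo, hi = 0, len(data)
    while lo < hi:
        mid = (lo + hi) // 2
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    result = lo if lo < len(data) and data[lo] == target else -1

    # postcondition ("ensure" in Eiffel): a non-negative result really
    # points at the target
    assert result == -1 or data[result] == target
    return result


print(binary_search([1, 3, 5, 7], 5))  # 2
```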
> Its runtime proofs are based off an abstraction of the algorithm suitable for direct manipulation by proofs rather than the actual implementation in code.
What is the difference? You are aware that the code is also only an abstraction, right?
The difference is that proving something about an abstraction doesn't prove you made no mistakes when translating that abstraction into the code that actually runs, and therefore you haven't proven anything of value about the actually running code.
Unfortunately proving anything about a concrete imperative implementation is orders of magnitude more complex than working with an abstraction, because you have to deal with pesky 'reality' and truly take care of every possible edge-case, so it only makes sense for the most critical applications. And sometimes there just isn't a framework to do even that, depending on your use case, and you'd have to sit down a PhD student for a while to build it. And even then you're still working with an abstraction of some kind, since you have to assume some kind of CPU architecture etc.
It really is more difficult to work with 'concrete implementations' to a degree that's fairly unintuitive if you haven't seen it first-hand.
I can't fathom how crazy it gets to model once you try to consider compilers, architectures, timings, temperatures, bit-flips & ECCs, cache misses, pseudo and "truly" random devices, threads, other processes, system load, I/O errors, networking.
To me it seems mandatory to work with some abstraction underneath that allows factoring a lot of different cases into a smaller set of possibilities that needs to be analysed.
It's also how we manage to think at all in a world where tiny details constantly give you a slightly, and usually insignificantly, different world-state to reason about.
https://markushimmel.de/blog/my-first-verified-imperative-pr...
Lean
Verification of "runtimes" in the sense of GP is not mentioned at all in the article you linked.
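To make the distinction concrete, here is a toy Lean 4 sketch (hypothetical, not taken from the linked post): the property is proved about the very definition that gets compiled and run, but nothing is said about how many steps it takes.

```lean
-- Hypothetical toy example, not from the linked article.
def double (n : Nat) : Nat := n + n

-- Functional correctness, proved directly against the implementation.
theorem double_eq (n : Nat) : double n = n + n := rfl

-- Nothing above says anything about the running time of `double`.
```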
Some algorithms such as binary search give an incorrect view of the overall cost. The search has a prerequisite of sorting. So, the assumption is, the data is sorted once, and searched several times, making the sorting cost insignificant.
What if the data is used for only a single lookup? In that case a sequential search would actually have lower cost than sorting plus binary search. In fact, sequential search may beat sorting and binary search for up to about 100 lookups. So I think it is important to consider the overall cost; a rough model of the trade-off is sketched below.
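A crude comparison-count model of that trade-off (a hypothetical sketch; the crossover point depends heavily on constants, caches, and branch prediction, so treat the exact numbers as illustrative only):

```python
import math

def linear_search_cost(n: int, lookups: int) -> float:
    # k scans of unsorted data: roughly n/2 comparisons each on average
    return lookups * n / 2

def sort_plus_binary_cost(n: int, lookups: int) -> float:
    # one ~n*log2(n) comparison sort, then k binary searches of ~log2(n) steps
    return n * math.log2(n) + lookups * math.log2(n)

n = 1_000
for k in (1, 10, 100, 1_000):
    print(f"{k:>5} lookups: linear ~{linear_search_cost(n, k):>9.0f}, "
          f"sort+binary ~{sort_plus_binary_cost(n, k):>9.0f}")
```

With this model the crossover for n = 1000 lands somewhere in the tens of lookups; real constants can push it higher or lower.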
It is not an "incorrect view of the overall cost".
Binary search talks about how long the search takes with this particular algorithm assuming the data is sorted. It does not talk at all about scenarios where the data is not sorted. In particular, it does not provide any views on costs without this assumption, let alone an incorrect one.
Yes, YOU need to consider when it is appropriate to use binary search. That is the case with all algorithms you will ever apply, and goes without saying.
But how the data got sorted is irrelevant to the speed of the algorithm: for example, you could use binary search as part of an algorithm to find the insertion point of a new element in an always sorted data structure, meaning that sorting the data is never necessary.
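For instance, a minimal Python sketch of that pattern using the standard bisect module, where the structure is kept sorted from the start and no separate sorting pass ever happens:

```python
import bisect

sorted_data: list[int] = []

def insert_sorted(x: int) -> None:
    # binary search finds the insertion point in O(log n) comparisons
    # (the list.insert itself still shifts elements, which is O(n))
    idx = bisect.bisect_left(sorted_data, x)
    sorted_data.insert(idx, x)

for value in (5, 1, 3, 2, 4):
    insert_sorted(value)

print(sorted_data)  # [1, 2, 3, 4, 5]
```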
The overall journey matters. For example, for some flight journeys the flight time is only a fraction of the total journey time, which can make road or rail faster overall. The flight's speed alone doesn't matter.