
Comment by kragen

1 day ago

I only skimmed this paper, but it doesn't mention floating point (it models the FNN only as a function on the reals), and I don't think you can extract a working FNN implementation from an SMT-LIB or Z3 problem statement. So I think you have to take on faith the correspondence between the implementation you're running and the thing you proved correct.

But that's the actual problem we're trying to solve; nobody really doubts Kalman's proof of his filter's optimality.

So this paper is not a relevant answer to the question, "how would you prove an implementation of a Kalman filter is correct?"

In most cases, the data a Kalman filter is working on has a precision much lower than the precision available in the floating-point format you are using. The problem is inherently a statistical one, since the expected precision depends on the statistics of your data source.

So you would probably adopt some conservative approach in which you show that the worst-case floating-point rounding error is much smaller than some small quantile of the error due to the data.
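To make that concrete, here's a back-of-the-envelope sketch in Python with made-up numbers (the noise sigma, step count, and linear error-accumulation bound are all illustrative assumptions, not taken from any real filter):

```python
import numpy as np

# Sketch of the conservative argument: a scalar filter running n_steps
# float64 updates on a sensor whose noise std sigma is far coarser than
# machine precision. Both numbers below are assumptions for illustration.
sigma = 0.01            # assumed measurement-noise standard deviation
n_steps = 10_000        # assumed number of filter updates

# Unit roundoff for float64: worst-case relative error per operation.
u = np.finfo(np.float64).eps / 2

# Crude pessimistic bound: rounding errors accumulate linearly over
# n_steps operations on O(1)-magnitude state values.
worst_case_rounding = n_steps * u

# A small quantile of the *data* error magnitude, estimated by sampling.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, sigma, size=100_000)
noise_q01 = np.quantile(np.abs(noise), 0.01)

print(f"rounding bound {worst_case_rounding:.2e} "
      f"<< noise 1% quantile {noise_q01:.2e}")
```

Even this deliberately pessimistic bound comes out many orders of magnitude below the data's own error, which is the shape of the argument you'd want to formalise.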

But I think specialised tools are more commonly used here than general-purpose provers. E.g., see https://github.com/arpra-project/arpra

  • That's a start, but you might be able to do better than that; for example, you ought to be able to show that the floating-point rounding error is unbiased.
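You can at least check that claim empirically (a sketch, not a proof): under round-to-nearest, the signed rounding errors over random inputs average out to nearly zero, unlike, say, truncation toward zero. Simulating rounding by casting float64 samples to float32:

```python
import numpy as np

# Empirical check that round-to-nearest rounding error is (approximately)
# unbiased: over random inputs, the mean signed error is tiny compared
# with the typical error magnitude.
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 2.0, size=1_000_000)          # float64 samples

nearest = x.astype(np.float32).astype(np.float64)  # round-to-nearest-even
err = nearest - x

print(f"mean signed error {err.mean():.2e}  "
      f"vs  mean |error| {np.abs(err).mean():.2e}")
```

Turning that empirical observation into a proof about a specific filter implementation is exactly where tools like arpra come in.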