
Comment by petesergeant

2 days ago

I’m struggling to wrap my head around abstraction as non-determinism here. Yes, an implementation could change, but then you’ve loaded other code?

It is deterministic, but if it's absurdly hard to prove, you just say it isn't deterministic.

Imagine a program that uses JSON and depends on the order of keys in an object. If you input the same object, but with different key order, you'll get different results.

The program is deterministic, but it looks non-deterministic to a naïve observer.

That's just a very simple example; in a more complex system there could be all kinds of uncommon behaviours that are harder to trace. So when modelling the system, they just box it as "there be dragons" and move on.
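A minimal Python sketch of the key-order case above (the `pick_primary` function and its "first key wins" rule are made up purely for illustration):

```python
import json

def pick_primary(record_text: str) -> str:
    # Hypothetical rule: whichever key appears first is the "primary" field.
    # That makes the result depend on serialization order, not on the data.
    record = json.loads(record_text)
    return next(iter(record))  # dicts preserve the key order of the input JSON

# Logically the "same" object, serialized with different key order:
a = '{"name": "ada", "id": 7}'
b = '{"id": 7, "name": "ada"}'

print(pick_primary(a))  # name
print(pick_primary(b))  # id  -- same data, different result
```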

The example is really bad for this, because the result isn't actually non-deterministic, just complex and treated as a black box, which only makes it hard to reason about when abstracted rather than truly non-deterministic.

A common example from my domain is making floating-point math deterministic. This is fairly straightforward for a single operating system/hardware combination, but as soon as you need cross-platform determinism it is full of gotchas, in particular because the abstraction in this case is often the programming language, the specific compiler used (sometimes even the compiler version), and the standard library. It also gets as subtle as not being able to use certain instructions, like fused multiply-add and SIMD intrinsics, whose results can differ between pieces of hardware.
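For concreteness, the underlying gotcha is that floating-point arithmetic is not associative, so anything that changes evaluation order or rounding (compiler re-association, SIMD lane reductions, an FMA that rounds once instead of twice) can legitimately change the last bits of a result. A tiny Python illustration of the ordering part:

```python
# Floating-point addition is not associative, so any tool that is free to
# re-order or fuse operations (compiler flags, SIMD reductions, FMA) can
# produce a different last bit on different platforms.
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)

print(left_to_right)                    # 0.6000000000000001
print(right_to_left)                    # 0.6
print(left_to_right == right_to_left)   # False
```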

The article wasn't bad, but the examples felt a bit thin, almost like the author was reaching to hit that magic number of 5.

And you're definitely not wrong on the abstract subsection. I mean, essentially all he says is that "there's a mechanism in our system which is so complicated/convoluted that we decided not to model it with TLA, so instead we chose to simply call it non-deterministic, and therefore it is now abstract non-determinism." Hmm. If we're truly being abstract, that's not really all that different from the PRNG and user input examples, except, you know, it's actually still completely deterministic code.

Also, I have to say- 5 different types of non-determinism, and not one mention of quantum systems?

I took issue with #4 too. If your code consumes API endpoints that are non-deterministic, that strikes me as something you should take up with your vendor. An error condition on a network call is not non-determinism. It's just reality from time to time.

  • The author is writing in the context of formal verification. From that point of view, it makes sense to model an error condition on a network call as non-deterministic, and to lump potential bugs in the remote API into it as well, since you don't control it (i.e. it is not part of the system you are trying to verify); that way you can go forward with verifying that your code will do the right thing no matter what happens (see the sketch at the end of the thread).

    • Understood. It was hard to tell if he was only writing it from the standpoint of FM or if he was speaking more generally.
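To make the "model the remote call as non-deterministic" point above concrete, here is a rough Python sketch (the `Outcome` set and `handle` function are invented for illustration; a model checker such as TLC does this branching over choices for you):

```python
from enum import Enum, auto

# The remote call's outcome is the part of the world we don't control, so we
# model it as a non-deterministic choice over everything it could do
# (timeouts, 5xx responses, and vendor bugs all land in the same bucket).
class Outcome(Enum):
    OK = auto()
    ERROR = auto()
    TIMEOUT = auto()

def handle(outcome: Outcome) -> str:
    """The code we actually want to verify: it must leave the system in a
    safe state no matter which outcome the environment picks."""
    if outcome is Outcome.OK:
        return "committed"
    return "rolled_back"

# Exhaustively branch on every possible choice, the way a model checker would.
for outcome in Outcome:
    assert handle(outcome) in {"committed", "rolled_back"}
```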