Comment by zozbot234

11 days ago

Proof-carrying code is efficient at runtime; the verifier only has to run once to verify the binary, which in principle can be done at install time. Even the verification itself can be fast, because it is only checking the soundness of existing proofs, not generating new ones from scratch. Where there is a genuine need to "make the proof system more expressive", this can be done by adding trusted axioms, which will generally be simple enough to check manually; the verifier itself need not become more complex.
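
A hedged sketch, in Java, of the shape that install-time flow could take; the class name and the axiom strings are invented for illustration and are not any real PCC system's API:

    import java.util.*;

    // Illustrative only: checking is a small fixed loop over proof steps that
    // were supplied with the binary, so it runs once at install time and never
    // again at runtime. Making the system "more expressive" means growing the
    // trusted axiom table (plain data that can be audited by hand), not
    // complicating this loop.
    final class InstallTimeVerifier {

        // A proof step claims a fact and cites the earlier steps it relies on.
        record Step(String fact, List<Integer> premises) {}

        // Trusted axioms, kept as simple auditable data.
        private static final Set<String> AXIOMS = Set.of(
                "bounds(load, array_length)",   // hypothetical safety facts
                "nonnull(this)"
        );

        // Accept the binary iff every step is an axiom instance or follows
        // from steps that were already established.
        static boolean check(List<Step> proof) {
            List<String> established = new ArrayList<>();
            for (Step step : proof) {
                boolean justified = AXIOMS.contains(step.fact())
                        || citesOnlyEstablished(step, established);
                if (!justified) return false;   // reject the whole binary
                established.add(step.fact());
            }
            return true;   // verified once; execution afterwards pays nothing
        }

        // Stand-in for a fixed, small set of inference rules: here we only
        // check that every cited premise was established earlier.
        private static boolean citesOnlyEstablished(Step step, List<String> established) {
            return step.premises().stream().allMatch(i -> i < established.size());
        }
    }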

We've seen a lot of complexity accumulate in Java, but that was generally in the standard library facilities bundled with the language and runtime, not really in the basic JVM type-checking pass. Proof-carrying code is closer to the latter.

I agree that verifier performance is not important to runtime performance (though there have been some changes to the JVM specifically to speed up class loading), but the expressivity of the safety proofs it can check is, because safety properties that cannot be proven at load time must be checked at runtime. The JVM's downcast when loading an object reference out of a generic container like ArrayList<Price> was an example initially, and I think it still is: javac has proven statically that the cast is safe, but because generics are erased, the JVM bytecode cannot express that proof. Pointer nullability in JVM languages like Kotlin is another example: Kotlin knows most object references can't be null, but the JVM still has to check at runtime.
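
To make the erasure point concrete, here is a small Java sketch; Price is just a placeholder class, and the bytecode in the comments is roughly what javap shows for code like this:

    import java.util.ArrayList;
    import java.util.List;

    // Placeholder type for the example.
    class Price {
        final long cents;
        Price(long cents) { this.cents = cents; }
    }

    class ErasureExample {
        static Price first(List<Price> prices) {
            // javac has already proven this call is type-safe, but generics are
            // erased, so List.get's descriptor returns java/lang/Object and the
            // compiled code looks roughly like:
            //   invokeinterface java/util/List.get:(I)Ljava/lang/Object;
            //   checkcast       Price    // runtime check the bytecode verifier
            //                            // cannot discharge at load time
            return prices.get(0);
        }

        public static void main(String[] args) {
            List<Price> prices = new ArrayList<>();
            prices.add(new Price(499));
            System.out.println(first(prices).cents);
            // Kotlin's non-null types are analogous: the Kotlin compiler knows
            // the value can't be null, but it still has to emit a runtime null
            // check because JVM bytecode can't carry that proof either.
        }
    }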

There have been numerous holes discovered in various implementations of the basic JVM type checking, often after existing for many years.

  • The JVM bytecode situation isn’t a great example, because that was a series of deliberate design choices made for lots of complex reasons. And the JVM absolutely can guarantee memory safety at the bytecode level; it’s just working with a slightly more dynamic type system than Java source.

    What would happen if you tried to do PCC for InvisiCaps and FUGC is that it would ultimately constrain what optimizations are possible, because the optimizer would only be able to pick from the set of tricks it could express a proof for within whatever proof system we picked (a generic sketch of that constraint follows at the end of this comment).

    Totally not the end of the world.

    Do I think this is an interesting thing to actually do? I’m not sure. It’s certainly not the most interesting thing to do with Fil-C right now.
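
    A generic sketch of that constraint (invented names, not Fil-C’s actual design): each optimization may fire only if it can also emit a certificate the chosen proof system can check, so any trick it can’t certify is simply unavailable.

        import java.util.Optional;

        // Generic illustration, not Fil-C: a pass must return a checkable
        // certificate alongside the rewritten IR, or decline to fire at all.
        final class CertifyingOptimizer {

            record Rewrite(String optimizedIr, String certificate) {}

            // Hypothetical bounds-check elimination: it only applies when the
            // justification is expressible in the proof system; otherwise the
            // slower-but-still-safe original IR is kept.
            static Optional<Rewrite> eliminateBoundsCheck(String ir, int index, int knownLength) {
                if (index >= 0 && index < knownLength) {
                    String certificate = "0 <= " + index + " < " + knownLength;  // toy proof term
                    return Optional.of(new Rewrite(ir.replace("checked_load", "raw_load"), certificate));
                }
                return Optional.empty();   // can't prove it => can't do the trick
            }
        }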

    • Yes, I agree with you that a correct JVM can guarantee memory safety at the bytecode level, but what I meant to express was that many JVMs have had bugs that caused them to fail to do so, for essentially social reasons which I expect to cause problems with other PCC systems as well.

      Maybe you're right, and those problems are not inevitable; for example, if you could reduce the proofs to a tiny MetaMath-like kernel that wouldn't need constant "maintenance". As you say, that could move the compiler's optimizer out of the TCB — at least for the security properties Fil-C is enforcing, though the optimizer could still cause code to compute the wrong answers.

      That seems like something people would want if they were already living in a world where the state of computer security was much better than it is now.

  • I mean, since Gödel we've pretty much known that there could never be a system "without holes".

    • No, that is not correct. Gödel showed that any sufficiently powerful, consistent formal axiomatic system contains statements it can neither prove nor refute; he did not show that no consistent formal axiomatic systems exist.
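
      For reference, the usual statement (paraphrased here in LaTeX) presupposes consistency rather than refuting it: if $T$ is consistent, effectively axiomatizable, and interprets enough arithmetic, then there is a sentence $G_T$ with $T \nvdash G_T$ and $T \nvdash \lnot G_T$. For a load-time verifier the practical upshot is incompleteness, not unsoundness: some true safety properties may simply be unprovable in the chosen system and have to fall back to runtime checks or extra trusted axioms.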
