Show HN: Formalizing Principia Mathematica using Lean


This project aims to formalize the first volume of Prof. Bertrand Russell’s Principia Mathematica using the Lean theorem prover. Throughout the formalization, I tried to rigorously follow Prof. Russell’s proofs, adding few or no statements of my own, and only where they were needed for the formalization rather than for the logical argument. Should you notice any inaccuracy (even if it does not necessarily falsify the proof), please let me know, as I would like to proceed in the same spirit of rigour. Before starting this project, I had already found Prof. Elkind’s formalization of the Principia using Rocq (formerly Coq), which is a much more mature work than this one. However, I still thought it would be fun to do it using Lean 4.

https://ndrwnaguib.com/principia/

https://github.com/ndrwnaguib/principia

What is the real difference between Rocq and Lean? Alternatively, what is your motivation for doing this in Lean as opposed to playing around with the Rocq formalization, since it already exists?

I recently completed the Natural Number Game in Lean and found it pretty fun, and I would like to learn more about the differences between the two. Thanks!

  • I don’t know about their motivation, but I would say mine is that Lean is a real programming language. Coq is not really meant for “prosaic” programming, more’s the pity.

    Lean is also a lot faster.
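
    To give a concrete sense of what “real programming language” means here: Lean 4 supports ordinary programming with IO, string interpolation, and in-editor evaluation, not only proofs. A trivial, hypothetical sketch (the names are made up and have nothing to do with the project above):

      -- plain Lean 4: a function, a runnable entry point, and interactive evaluation
      def greet (name : String) : String :=
        s!"hello, {name}"

      def main : IO Unit :=
        IO.println (greet "world")

      #eval greet "HN"  -- "hello, HN"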

This is cool; I looked into this many years ago (using Metamath).

Sorry if this is obvious in one of the links, but does there exist a high-quality “OCR-ed” version of the original text?

It looks like you just have a few pages written. Is that right?

Which theorem are you trying to prove?

  • Yes; the goal is to finish the first volume. I am particularly looking forward to formalizing the well-known 1+1 proof.
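
    To make “the 1+1 proof” concrete, here is a minimal sketch in plain Lean 4. It is not the encoding this repository uses (which follows Principia’s own primitives), and IsPair is a made-up helper; it only illustrates the gap between Lean’s built-in arithmetic and Principia’s class-based statement *54.43:

      -- For Lean's built-in natural numbers the statement holds by computation:
      example : 1 + 1 = 2 := rfl

      -- Principia's *54.43 is closer to a claim about classes: the union of two
      -- distinct singletons has exactly two members. A loose rendering over predicates:
      abbrev IsPair {α : Type} (S : α → Prop) : Prop :=
        ∃ a b, a ≠ b ∧ (∀ x, S x ↔ (x = a ∨ x = b))

      example {α : Type} (a b : α) (h : a ≠ b) :
          IsPair (fun x => x = a ∨ x = b) :=
        ⟨a, b, h, fun _ => Iff.rfl⟩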

    • My understanding is that the first bit follows first-order logic fairly closely but then diverges as Russell builds up different classes of sets, etc. Do you have a line of sight on how that is going to translate?

> Although the Principia is thought to be “a monumental failure”, as said by Prof. Freeman Dyson

I'd like some elaboration on that. I failed to find a source.

  • Principia was written during the naive Logicist era of the philosophy of mathematics, which couldn't foresee serious foundational decidability issues in logic like Gödel's incompleteness theorems or the Halting Problem. Formalism/Platonism and Constructivism are two streams that came out of Logicism as ways to fix those logical issues, and they are (very roughly speaking) the philosophical bases of classical mathematics and constructive mathematics today.

    The way the formalists (the mainstream mathematical community) dealt with the foundational issues was to study them very closely and precisely so that they could ignore them as much as possible. The philosophical justification is that even if a statement P is undecidable, ultimately, within the universe of mathematical truth, it is either true or false and nothing else, even though we may not be able to construct a proof of either.

    Constructivists, on the other hand, took the opposite approach: they equated mathematical truth with provability, so undecidable statements P are, constructively, neither true nor false. This means Aristotle's law of excluded middle (for any statement P, P or (not P)) no longer holds, and therefore constructivists had to rebuild mathematics on a different logical basis (there is a short Lean illustration at the end of this comment).

    The issue with Principia is that it has no way to deal with problems like these, so the way it lays out mathematics no longer makes total sense, and its goals (its mathematical program) turned out to be impossible.

    Note: this is an extreme oversimplification. I recommend the Stanford Encyclopedia of Philosophy for a more detailed overview, e.g. https://plato.stanford.edu/entries/hilbert-program/
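
    Since the thread is about Lean: Lean's kernel is constructive, and excluded middle is not primitive; it is derived from the Classical.choice axiom (essentially Diaconescu's theorem) and lives in the Classical namespace. A minimal illustration in standard Lean 4 (nothing project-specific):

      -- excluded middle is available, but as a classical theorem, not a built-in rule:
      example (P : Prop) : P ∨ ¬P := Classical.em P

      -- double-negation elimination, which constructivists reject in general:
      example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h

      -- `#print axioms Classical.em` should report Classical.choice (along with
      -- propext and Quot.sound) as the axioms it depends on.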

    • Nobody argues about the result of an addition, because the computation is mechanically verifiable. The same goes for statements that are properly formalized in logic. The goal was to have the same for all of mathematics. So incompleteness is not a problem per se -- even if it shook people so much at the time (because proof theory always works within a given system). Incompleteness is the battering ram that is used to break down the walls of common sense.

      If incompleteness isn't the killer of the Hilbert program, what is? The axiom of choice and the continuum hypothesis. Both lack any form of naturalness that would forestall philosophical argument; worse, so does rejecting them. There is such a wealth of intuitionistically absurd results implied by these systems -- most famously, there is the joke that “The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?”, when these three statements are _logically_ equivalent. So we're back to a mathematical form of epistemological anarchism: there is no universal axiomatic basis for doing mathematics, and any justification for the use of one has to be found outside mathematics.

  • https://www.youtube.com/watch?v=9RD5D4swZfk - Possibly this.

    • Thanks. It appears, however, that Dyson considers the whole approach a failure (referring to Gödel as its demolisher). So while he is saying it about the book, ironically it hardly seems applicable in this context anymore; by that reasoning, any program written in Lean (and the Lean programming language itself) would have to be seen as "a monumental failure" as well.
