Comment by lacker
2 years ago
100 years ago everyone was worrying about whether the foundations of mathematics were consistent. We figured out some details that we had been taking for granted, like the axiom of choice, how to prevent Russell's paradox, and the inevitable difference between "true" and "provable" (i.e. Gödel's incompleteness). Now that part is pretty much solved; we are fairly confident that the foundations of mathematics are consistent.
IMHO the problem now is that the foundations of mathematics are impractical. Most mathematicians don't actually do mathematics that can be tied back to the foundational axioms. It seems like it should be possible, in theory, but it takes too much work. So most mathematicians prefer communicating their proofs in PDFs rather than in a more rigorous format.
> It seems like it should be possible, in theory, but it takes too much work.
The definition of a valid mathematical proof that I've heard mathematicians use is that it convinces other mathematicians. I think there's integrity in depth in mathematical proofs for lots of reasons; tying back to an axiomatic basis is a lot of extra work for likely no benefit.
As for formal mathematics, almost no mathematicians pay attention to it afaik. It's something that lives in philosophy departments, not mathematics.
> As for formal mathematics, almost no mathematicians pay attention to it afaik. It's something that lives in philosophy departments, not mathematics.
In the old days there were philosophers. They sought truth through thought and wisdom. Some started to measure reality and compare thought against that measurement, but real philosophers paid little heed, for it didn't matter.
Testing theory against reality became known as physics. It turned out that testing against reality can be used without the theory, which we call engineering. We still have philosophers, and they still pay little heed to measurement.
There is a similar disaster inbound for mathematicians. Formal mathematics is tedious and mostly only shows things that you were pretty sure were true anyway, so why bother.
It turns out the physicists and engineers have unsportingly built machines which are really, really good at the tedious bookwork required by formal mathematics. That is going to go exactly the same way that introducing measurement to philosophy went for the philosophers.
> There is a similar disaster inbound for mathematicians. Formal mathematics is tedious and mostly only shows things that you were pretty sure were true anyway, so why bother.
> It turns out the physicists and engineers have unsportingly built machines which are really, really good at the tedious bookwork required by formal mathematics. That is going to go exactly the same way that introducing measurement to philosophy went for the philosophers.
Formalized mathematics is just a very small subset of what mathematicians work on: in my observation, people who work on (computer-)formalized proofs tend to sit in CS departments rather than math departments.
This may be changing as we speak. Terence Tao has recently been working to formalize existing work in Lean, and has already found a bug in one of his published proofs. I think there will always(-ish) be a need for mathematical proofs in the contemporary sense, but at this point I don't think it's too implausible to suggest a future where the mathematical mainstream moves toward some hybrid of mathematical exposition and computationally verified artifact.
https://mathstodon.xyz/@tao/111287749336059662
I think it's likely that axiomatic approaches, and formal approaches will continue to produce interesting results and have some effect on regular mathematics.
But this is very different to suggesting that most regular mathematics will switch to using formal proofs. There's a big ergonomics gap at the moment.
An analogy: pure mathematicians look down on applied mathematicians' work, but the applied mathematicians don't care; they just get on with their own standards. You need regular mathematicians to choose to switch over en masse; what will compel them to do that?
The canonical paper on this was written by De Millo, et al., but it has a rather ungrammatical and incomprehensible title that does it no favors. However, the al. include Lipton and Perlis, so listen up...
Social Processes and Proofs of Theorems and Programs
https://dl.acm.org/doi/pdf/10.1145/359104.359106 [pdf]
I think the most exciting work in mathematics today is in the formal foundations. However, I can also understand mathematicians who are thinking like this:
1. I only need normal congruence
2. I only need perfect information games
For problems that are solvable under these two assumptions, there is little benefit in tying proofs back to an axiomatic basis. Once you drop either of these assumptions, proofs get much harder and a solid foundation becomes more important.
> Now that part is pretty much solved; we are fairly confident that the foundations of mathematics are consistent.
Well, if you want them to be. I'm quite fascinated by the field of inconsistent mathematics, which attempts to show that one can do real and useful mathematics given foundations which are at least tolerant of inconsistency (or even actually inconsistent).
https://plato.stanford.edu/entries/mathematics-inconsistent/
https://ir.canterbury.ac.nz/server/api/core/bitstreams/d6722...
Mathematicians have always been about equally certain that their foundations were consistent, but have gone from not knowing what those foundations really were, to expecting to eventually prove them consistent, to knowing they can't prove them consistent (at least not from within the system itself).
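For reference, a standard schematic statement of Gödel's second incompleteness theorem, with Con(T) the arithmetized claim that T is consistent (my summary, not the commenter's):

```latex
% Gödel's second incompleteness theorem, schematically (requires amsmath):
% a consistent, recursively axiomatizable theory T that interprets
% enough arithmetic (e.g. PA) cannot prove its own consistency.
\[
  T \text{ consistent and r.e., } T \supseteq \mathrm{PA}
  \quad\Longrightarrow\quad
  T \nvdash \mathrm{Con}(T)
\]
```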
> IMHO the problem now is that the foundations of mathematics are impractical. Most mathematicians don't actually do mathematics that can be tied back to the foundational axioms.
I suppose my followup to that is: why is this a problem? Does it lead to bad mathematics? (For what definition of "bad"?) Would we be able to solve unsolved problems if we tied things back to foundations more? Mathematicians have never really done much mathematics that ties back to the foundational axioms, so why change? Rigor is not really much of an end in itself; surely people do mathematics because it is useful or beautiful, or both?
Much progress has been made in mathematics by demonstrating that problems in one realm are logically equivalent to solved problems in another realm, allowing proofs from the other realm to support proofs in the first. The most obvious example is the solution to Fermat's Last Theorem by Andrew Wiles, which wended far afield through the Taniyama–Shimura conjecture involving elliptic curves and modularity.
So in theory, with a solid foundation for mathematics, and new work rigorously tied back to it, this sort of thing should be both easier and more efficient.
The real benefit is certainty.
If somebody makes a proposition and nobody is able to properly vet it, or if you just get a bunch of white-haired heads to nod sagely and endorse it, very little prevents there being a flaw that invalidates the whole thing, one that might go unnoticed for decades and ruin anything anyone tries to build atop the result.
It's the same as security hardening in code, or checking for med interactions at the pharmacist's. Or checking to make sure that the absolutely correct pill is being dispensed and not mixed up "Buttle/Tuttle" style with some other similar-looking pill.
Formal rigor paved all the way from the proposition to the axioms is not 100%; nothing is 100%. But formal rigor paved over that entire distance redistributes all potential for error to the proof chain itself, which is now far easier for a non-AI computer to verify, and to the strength of the axioms themselves, which have had more scrutiny paid to them than any other statements in the history of mathematics.
Another important point is that you do not have to go all the way back to the axioms if someone else has already formally proved other complex statements you can rely upon instead; then you only have to pave the length of your driveway to the nearest communal road.
And you can apply it; when you do, it really works.
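To make the "communal road" point concrete, here is a minimal sketch in Lean 4, assuming a project with Mathlib available (the lemma name `Nat.exists_infinite_primes` is Mathlib's; the exact import path may vary between Mathlib versions). You don't rebuild the proof chain back to the axioms yourself; you cite an already-verified result, and the kernel checks only your short connecting step.

```lean
-- A minimal sketch, assuming a Lean 4 project with Mathlib on the path.
-- The import path below matches recent Mathlib; older versions used
-- Mathlib.Data.Nat.Prime instead.
import Mathlib.Data.Nat.Prime.Basic

-- For any n there is a prime at least as large as n. We don't pave the
-- whole road back to the axioms: we cite Mathlib's already-machine-checked
-- `Nat.exists_infinite_primes`, paving only our own driveway.
example (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
  Nat.exists_infinite_primes n
```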
I think AI for mathematics will help us solve unsolved problems. There should be a virtuous cycle during its development.
1. More formalized training data makes the math AI better
2. Better math AI helps us formalize more known mathematics
But right now math AI is not very useful. It isn't at the point ChatGPT has reached for programming, where the AI is really helpful for a lot of programming tasks even if it isn't superhuman yet. So the virtuous cycle hasn't started yet. But I think we will get there.
> It seems like it should be possible, in theory, but it takes too much work.
Biologists don't model behaviors based on the four fundamental forces of physics.
It seems like it should be possible, in theory, but it takes too much work.
They don't, but they use chemistry.
I guess a point can be made that, even though chemistry is just applied quantum electrodynamics, people seldom derive it from first principles.
One thing I would like to point out about Gödel's incompleteness theorems is that there are different notions of provability. Gödel uses the notion of "provability" you get from Provability Logic, which is a modal logic where you introduce terms from e.g. Sequent Calculus.
Recently I found another notion of "provability" where Löb's axiom, required in Provability Logic, is absurd. It turns out that this notion fits better with Intuitionistic Propositional Logic than Provability Logic does. This allows integrating the meta-language into the object-language. This is pretty recent, so I think we still have much to learn about the foundations of mathematics.
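For readers following this subthread, here are the modal axioms and rules referenced below, in standard notation (my reference summary, with □ read as "it is provable that"):

```latex
% Standard modal axioms/rules referenced in this thread (requires amsmath):
\begin{align*}
  \text{K:}                 &\quad \Box(p \to q) \to (\Box p \to \Box q) \\
  \text{T:}                 &\quad \Box p \to p \\
  \text{N (necessitation):} &\quad \text{from } \vdash p \text{ infer } \vdash \Box p \\
  \text{Löb:}               &\quad \Box(\Box p \to p) \to \Box p
\end{align*}
% Provability Logic (GL) is K plus Löb plus necessitation;
% notably, it does not include T.
```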
This makes it kind of sound as if Löb's axiom is used in the proof of Gödel's incompleteness theorems, but Gödel was working in (I think) PA, not in Provability Logic?
I guess you just meant "the notion of provability is the same as the one that would later be described in Provability logic" ?
I viewed the page you linked, but I don't see anywhere where you describe the alternate notion of "provability" you have in mind.
"Provability" in the sense of Gödel's incompleteness theorems, is just, "There exists a proof of it in [the chosen system of axioms + rules of inference]", yes? I don't see how any other notion would be fitting of the name "provability". "provable" , "possible to prove", "a proof exists", all the same thing?
Oh, I guess, if you want to use "there exists" in a specifically constructive/intuitionistic way? (so that to prove "[]P" you would have to show that you can produce a proof of P?)
I think I've seen other people try to do away with the need for a meta-language / make it the same as the object-language, and, I think this generally ends up being inconsistent in the attempts I've seen?
edit: I see you have written 'Self quality a ~~ a is equivalent to ~a, which is called a "qubit".'
I don't know quite what you mean by this, but, _please_ do not call this a qubit, unless you literally mean something whose value is a vector in a 2d Hilbert space.
> I guess you just meant "the notion of provability is the same as the one that would later be described in Provability logic" ?
yes
> I viewed the page you linked, but I don't see anywhere where you describe the alternate notion of "provability" you have in mind.
See section "HOOO EP"
>Oh, I guess, if you want to use "there exists" in a specifically constructive/intuitionistic way? (so that to prove "[]P" you would have to show that you can produce a proof of P?)
This would be `|- p` implies `□p`, which is the N rule (necessitation) in modal logic, used by Provability Logic.
In Provability Logic, you can't prove `□false => false`. If you could prove it, then Löb's axiom would yield `false`, hence absurdity. `□p => p` for all `p` is the T axiom in modal logic. In IPL, if you have `true |- false`, then naturally you can prove `false`. So, you can only prove `□false => false` if you already have an inconsistent theory.
Provability Logic is biased toward languages that assume the theory is consistent. However, if you want to treat provability from the perspective of any possible theory, then favoring consistency at the meta-level is subjective.
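Spelling out why Provability Logic proving `□false => false` would be absurd (a standard argument; my reconstruction of the step the parent compresses):

```latex
% Why GL cannot prove \Box\bot \to \bot, i.e. its own consistency
% (requires amsmath):
\begin{align*}
  1.\;& \vdash \Box\bot \to \bot        && \text{(assumed, for contradiction)} \\
  2.\;& \vdash \Box(\Box\bot \to \bot)  && \text{(necessitation on 1)} \\
  3.\;& \vdash \Box\bot                 && \text{(Löb's axiom + modus ponens on 2)} \\
  4.\;& \vdash \bot                     && \text{(modus ponens, 1 and 3)}
\end{align*}
```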
> I think I've seen other people try to do away with the need for a meta-language / make it the same as the object-language, and, I think this generally ends up being inconsistent in the attempts I've seen?
I have given this some thought before, and I think it is based on fixed-point results for predicates of one argument in Provability Logic. For example, in Gödel's proof he needs to encode a predicate without arguments (a sentence) in order to create the Gödel sentence. In a language without such fixed points, this might not be a problem.
> I don't know quite what you mean by this, but, _please_ do not call this a qubit, unless you literally mean something whose value is a vector in a 2d Hilbert space.
The name "qubit" comes from the classical model, where you generate a random truth table using the input bit vector as seed. So, the proposition is in super-position of all propositions and hence behaves like a "qubit" in a classical approximation of a quantum circuit.
I'm attempting to understand you, so:
Are you saying that Löb's axiom, which states that the provability of "the provability of p implies p" implies the provability of p, necessarily builds some implicit assumption of consistency into the meta-language?
How so, and/or, what are the axioms, or derived properties, of this new notion of provability you have uncovered?
> Are you saying that Löb's axiom, which states that the provability of "the provability of p implies p" implies the provability of p, necessarily builds some implicit assumption of consistency into the meta-language?
Yes. One way to put it: Provability Logic is balancing on a knife-edge. It works, but just barely. However, you can turn it around and say the new notion is balancing on a knife-edge by requiring a DAG (Directed Acyclic Graph) at the meta-level. The way I see it, both approaches have implicit assumptions and you have to trade one for the other.
I am working on an implementation of the new notion of provability (https://crates.io/crates/hooo), after finding the axioms last year (it took several months):
As a modal logic the difference is surprisingly small: you swap Löb's axiom for T. `tauto_hooo_imply` is slightly stronger than K.
The major difference is that `|- p` equals `p^true` instead of merely implying it, if you treat `|-` as internal. If you treat it as external, then you can think of it as N + T.
I needed this theory to handle reasoning about tautological congruent operators.
However, once you have this, you can reason perfectly well about various modal logical theories, including Provability Logic, by keeping their modality operators separate, e.g. `modal_n_to : N' -> all(a^true => □a)`.
So, it is not a tradeoff that loses Provability Logic. Instead, you get a "finalized" IPL for exponential propositions. This is why I think of it as a natural way of extending IPL with some notion of provability.
> Recently I found another notion of "provability"
Any pointers to that alternative notion?
I'm implementing it in this project: https://crates.io/crates/hooo
I've recently found out that String Theory got so big partially because Calabi–Yau manifolds have some neat, mathematically tractable properties. (The one I somewhat understand is that, being manifolds, they are locally smooth everywhere, which... doesn't seem like it should be a hard requirement for the building blocks of reality.) Feels like that joke about looking for your lost keys under the streetlamp because the light is there, even though you lost the keys somewhere completely different.
> inevitable difference between "true" and "provable" (i.e. Gödel's incompleteness). Now that part is pretty much solved; we are fairly confident that the foundations of mathematics are consistent.
Your assertion about the inevitability is true only under specific axioms. If you are willing to drop the notion that infinity exists, or the law of excluded middle (and maybe others), the problems that arise from Gödel's incompleteness theorems don't necessarily apply.
I'm very far from mathematician circles, so I don't know how they actually interpret Gödel, but the idea of being "fairly confident" that math foundations are consistent, while knowing that either there are true statements we can't prove or the foundations aren't consistent, feels like a pretty weird thing to me.
I always thought most mathematicians just looked at Gödel like he discovered an interesting novelty, then ignored him and went back to doing whatever math they were already doing, instead of losing sleep over what his theorem implied.
That there are statements which might be true but we don't know how to prove was always the reality in mathematics. That this is not a temporary reality but something we're doomed to was thus not a huge revelation. Definitely not something I would expect anyone to lose sleep over.
Also, is it true that constructive mathematics actually escapes Gödel's theorems? My understanding was that any system strong enough to contain regular arithmetic is subject to them. Either way, constructive math is more limited in what it accepts as proofs, so there will be more, not fewer, statements that are true but unprovable in such a system.
Classical logic embeds in intuitionistic logic via the Gödel–Gentzen double-negation translation; see https://en.m.wikipedia.org/wiki/Double-negation_translation
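For concreteness, the Gödel–Gentzen translation maps each classical formula A to a formula A^N that is provable intuitionistically exactly when A is provable classically (standard presentation, quoted from memory):

```latex
% Gödel–Gentzen negative translation A -> A^N (requires amsmath):
\begin{align*}
  P^N               &= \lnot\lnot P \quad (P \text{ atomic}) \\
  \bot^N            &= \bot \\
  (A \land B)^N     &= A^N \land B^N \\
  (A \lor B)^N      &= \lnot(\lnot A^N \land \lnot B^N) \\
  (A \to B)^N       &= A^N \to B^N \\
  (\forall x\, A)^N &= \forall x\, A^N \\
  (\exists x\, A)^N &= \lnot \forall x\, \lnot A^N
\end{align*}
% Classical logic proves A iff intuitionistic logic proves A^N.
```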