Comment by zkmon

14 days ago

We are entering into a probabilistic era where things are not strictly black and white. Things are not binary. There is no absolute fake.

A mathematical proof is an assertion that a given statement belongs to the world defined by a set of axioms and existing proofs. This world need not have strict boundaries. Proofs can have probabilities. Maybe Riemann's hypothesis has a probability of 0.999 of belonging to that mathematical box. New proofs would have their own probability, which is the product of the probabilities of the proofs they depend on. We should attach a probability and move on, just like how we assert that some number is probably prime.
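The "probably prime" remark refers to randomized primality tests such as Miller-Rabin. A minimal sketch in Python (the function name and the choice of 20 rounds are illustrative assumptions, not anything from the comment):

```python
# Miller-Rabin: each round that fails to find a witness of compositeness cuts the
# error probability by at least a factor of 4, so after k rounds a composite number
# slips through with probability at most 4**(-k).
import random

def probably_prime(n: int, rounds: int = 20) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as 2^r * d with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness: n is definitely composite
    return True  # no witness found: n is composite with probability <= 4**(-rounds)

print(probably_prime(2**61 - 1))  # a known Mersenne prime -> True
```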

Definitely not.

"Probability" does not mean "maybe yes, maybe not, let me assign some gut feeling value measuring how much I believe something to be the case." The mathematical field of probability theory has very precise notions of what a probability is, based in a measurable probability space. None of that applies to what you are suggesting.

The Riemann Hypothesis is a conjecture that's either true or not. More precisely, either it's provable within common axioms like ZFC or its negation is. (A third alternative is that it's unprovable within ZFC but that's not commonly regarded as a realistic outcome.)

This is black and white, no probability attached. We just don't know the color at this point.

  • >> "Probability" does not mean "maybe yes, maybe not, let me assign some gut feeling value measuring how much I believe something to be the case."

    That's exactly what Bayesian probabilities are: gut feelings. Speaking of values attached to random variables, a good Bayesian basically pulls their probabilities out of their ass. Probabilities, in that context, are nothing but arbitrary degrees of belief based on other probabilities. That's the difference from the frequentist paradigm, which attempts to set the values of probabilities by observing the frequency of events. Frequentists ... believe that observing frequencies is somehow more accurate than pulling degrees of belief out of one's ass, but that's just a belief itself.

    You can put a theoretical sheen on things by speaking of sets or probability spaces etc, but all that follows from the basic fact that either you choose to believe, or you choose to believe because data. In either case, reasoning under uncertainty is all about accepting the fact that there is always uncertainty and there is never complete certainty under any probabilistic paradigm.

    • Baffling to see such a take on HN.

      If I give you a die and ask about the probability of rolling a 6, then it's exactly 1/6. Being able to quantify this exactly is the great success story of probability theory. You can have a different "gut feeling", and indeed many people do (lotteries are popular), but you would be wrong. If you run this experiment a large number of times, then about 1/6 of the outcomes will be a 6, proving the 1/6 right and the deviating "gut feeling" wrong (see the simulation sketch after this thread). That number is not "pulled out of somebody's ass" or some frequentist approach. It's what probability means.

  • It's time that mathematics needs to choose its place. Physical world is grainy and probabilistic at quantum scale and smooth and deterministic at larger scale. Computing world is grainy and deterministic at its "quantum" scale (bits and pixels) and smooth and probabilistic at larger scale (AI). Human perception is smooth and probabilistic. Which world does mathematics model or represent? It has to strongly connect to either the physical world or the computing world. To be useful to humans, it needs to be smooth and probabilistic, just like how computing has become.

    • > Physical world is grainy and probabilistic at quantum scale and smooth and deterministic at larger scale.

      This is almost entirely backwards. Quantum Mechanics is not only fully deterministic, but even linear (in the sense of linear differential equations) - so there isn't even the problem of chaos in QM systems. QFT maintains this fundamental property. It's only the measurement, the interaction of particles with large scale objects, that is probabilistic.

      And there is no dilemma - mathematics is a framework in which any of the things you mentioned can be modeled. We have mathematics that can model both deterministic and nondeterministic worlds. But the mathematical reasoning itself is always deterministic.
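Returning to the die example above: a minimal simulation sketch (assuming a fair six-sided die, using Python's standard random module) showing the empirical frequency settling near 1/6:

```python
# Roll a fair die many times; the observed frequency of sixes converges to 1/6
# (law of large numbers), independently of anyone's prior "gut feeling".
import random

rolls = 1_000_000
sixes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 6)
print(sixes / rolls)  # typically about 0.1667, i.e. close to 1/6
```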

What you're hinting at is the fact that proofs created by human mathematicians are not complete proofs but rather sketch proofs, whose purpose is to convince mathematicians (including the person deriving the proof) that a statement (like the Riemann hypothesis) is true. Such human-derived proofs can even be wrong, as they sometimes turn out to be, so the mere fact that a proof has been given doesn't mean we have to automatically believe what it proves.

In that sense, proofs can be seen as evidence that a statement is true, and since one interpretation of Bayesian probabilities is that they express degrees of belief in the truth of a formal statement, proofs do have something to do with probabilities.

But, in that context, it's not proofs that probabilities should be attached to. Rather, we can assign some probability to a formal statement, like the Riemann hypothesis, given that a proof exists. The proof is evidence that the statement is true, and we can adjust our belief in the truth of the statement according to this and possibly other lines of evidence. In particular, if there are multiple, different proofs of the same statement, that can increase our certainty that the statement is true.
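As a toy illustration of that kind of updating (all numbers below are assumptions chosen purely for illustration, not anything asserted in this thread): treat each independent proof as fallible evidence and apply Bayes' rule.

```python
# Toy Bayesian update: how much should two independent, fallible proofs of the same
# statement raise our belief that it is true? The rates below are illustrative only.
prior = 0.5               # prior belief that the statement is true
p_proof_if_true = 0.9     # assumed chance an apparently valid proof appears if it is true
p_proof_if_false = 0.05   # assumed chance an apparently valid proof appears if it is false

def update(prior: float, like_true: float, like_false: float) -> float:
    """One step of Bayes' rule: P(true | one more proof observed)."""
    num = like_true * prior
    return num / (num + like_false * (1 - prior))

after_one = update(prior, p_proof_if_true, p_proof_if_false)
after_two = update(after_one, p_proof_if_true, p_proof_if_false)
print(after_one, after_two)  # roughly 0.947, then roughly 0.997
```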

The thing to keep in mind is that computers can derive complete proofs, in the sense that they can mechanically traverse the entire deductive closure of a statement given the axioms of a theory and determine whether the statement is a theorem (i.e. true) or not, without skipping or fudging any steps, however trivial. This is what automated theorem provers do.
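To see what "no skipped or fudged steps" means in practice, here is a tiny example in Lean 4 (a proof assistant rather than a fully automated prover, used only as an illustration): the kernel mechanically checks every inference, and nothing is left to the reader.

```lean
-- Both proofs below are accepted only because Lean's kernel verifies each step.
theorem n_plus_zero (n : Nat) : n + 0 = n := by
  rfl

theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p :=
  ⟨h.right, h.left⟩
```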

But it's important to note that LLMs don't do that kind of proof. They give us, at best, sketch proofs like the ones derived by human mathematicians, with the added complication that LLMs themselves cannot distinguish between a correct proof (i.e. one where every step, however fudgy, follows from the ones before it) and an incorrect one. A human mathematician, or an automated theorem prover, is still required to check the correctness of a proof. LLM-based proof systems like AlphaProof work that way, passing an LLM-generated proof to an automated theorem prover as a verifier.
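A schematic of that generate-and-verify division of labour (the function names are hypothetical placeholders, not a real API, and this is not a description of AlphaProof's internals):

```python
# Generate-and-verify loop: an LLM proposes candidate proofs, but only a trusted,
# deterministic proof checker can accept one. Both helper functions below are stubs.

def sample_candidate_proof(statement: str, attempt: int) -> str:
    """Hypothetical stand-in for an LLM call that emits a candidate formal proof."""
    return f"-- candidate proof #{attempt} of: {statement}"

def kernel_accepts(statement: str, proof: str) -> bool:
    """Hypothetical stand-in for a theorem prover / proof-assistant kernel."""
    return False  # a real checker would verify every step of the candidate proof

def prove(statement: str, max_attempts: int = 8) -> str | None:
    for attempt in range(max_attempts):
        candidate = sample_candidate_proof(statement, attempt)
        if kernel_accepts(statement, candidate):
            return candidate  # accepted only because the checker verified it
    return None  # the LLM's output alone is never trusted

print(prove("n + 0 = n"))
```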

Mechanically derived, complete proofs like the ones generated by automated theorem provers can also be assigned degrees of probability, but once we are convinced of the correctness of a prover (... because we have a proof!), we can trust the proofs derived by that prover and have complete belief in the truth of any statements it derives.