Comment by alan-crowe

1 month ago

The review distills the book's view of the difference between pure and applied mathematics: "applied" split from "pure" to meet the technical needs of the US military during WW2.

My best example of the split is https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives. Wikipedia notes that "The list of unsuccessful proposed proofs started with Euler's, published in 1740,[3] although already in 1721 Bernoulli had implicitly assumed the result with no formal justification." The split between pure (Euler) and applied (Bernoulli) was already there.

The result is hard to prove because, stated in full generality, it isn't actually true. A simple proof would apply equally well to a counterexample, so it cannot be correct. A correct proof has to invoke the additional hypotheses needed to block the counterexamples, so it cannot be simple.
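The standard counterexample is easy to check numerically. Here is a minimal sketch (the function is the classic counterexample often attributed to Peano; the step sizes are ad hoc choices): nested central differences give different answers depending on the order of differentiation at the origin.

```python
def f(x, y):
    # Classic counterexample to unrestricted symmetry of second derivatives:
    # both mixed partials exist at the origin, but they disagree there.
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

h = 1e-6   # inner step, for the first derivatives
k = 1e-3   # outer step, kept much larger than h on purpose

def fx(x, y):  # df/dx by central difference
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y):  # df/dy by central difference
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

fxy = (fx(0.0, k) - fx(0.0, -k)) / (2 * k)  # differentiate fx in y, at the origin
fyx = (fy(k, 0.0) - fy(-k, 0.0)) / (2 * k)  # differentiate fy in x, at the origin
print(fxy, fyx)  # approximately -1.0 and +1.0
```

Both mixed partials exist at the origin but come out as -1 and +1, so no theorem without extra hypotheses can force them to agree.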

Since the human life span is 70 years, I face an urgent dilemma. Do I master the technique needed to understand the proof (fun), or do I crack on and build things (satisfaction)? Pure mathematicians plan to construct long and intricate chains of reasoning; a small error can get amplified into an error that matters, since from a contradiction one can prove anything. Applied mathematics gets applied to engineering: build a prototype and discover problems with tolerances, material impurities, and annoying edge cases in the mathematical analysis. An error will likely show up in the prototype. Pure? Applied? It is really about the ticking of the clock.

I think the problem is that theoretical real analysis is often presented as if it were nothing but a validation of things people already knew to be true -- but maybe it's not?

The example you gave concerns differentiation. Differentiation is messy in real analysis because it's messy in numerical computing, and how real analysis fixes this mess parallels how numerical computing must fix it. How do we make differentiation - or just derivatives, perhaps - computable?

The rock-bottom condition for computability is continuity: all discontinuous functions are uncomputable. It turns out that requiring the second partial derivatives f_{xy} and f_{yx} to be continuous is sufficient to make the theorem hold -- they wouldn't even be computable otherwise!
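To make the continuity requirement concrete, here is a toy sketch (hypothetical code, not a real computable-analysis library): a machine can only ever observe a finite-precision enclosure of its input, and at a jump discontinuity no enclosure, however tight, determines the output.

```python
def sign_from_interval(lo, hi):
    """Attempt to compute sign(x) knowing only that x lies in [lo, hi]."""
    if lo > 0:
        return 1
    if hi < 0:
        return -1
    return None  # enclosure straddles the jump: no finite precision decides

print(sign_from_interval(0.3, 0.5))      # 1: safely to the right of the jump
print(sign_from_interval(-1e-9, 1e-9))   # None: undecidable at any precision
```

For a continuous function, tightening the input enclosure eventually pins down the output to any desired accuracy; at the jump of sign, it never does.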

One of the proofs provided uses integration. In numerical contexts, it is integration that is considered "easy" and differentiation that is considered hard -- exactly backwards from symbolic calculus.
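A small experiment illustrates the asymmetry (a sketch with arbitrarily chosen test functions and step sizes): the central-difference error bottoms out at some optimal step and then grows as rounding takes over, while refining a trapezoid rule just keeps improving.

```python
import math

# Central difference for sin'(1) = cos(1): truncation error shrinks with h,
# but rounding error grows like eps/h, so there is a floor you cannot pass.
diff_err = []
for p in range(1, 16):
    h = 10.0 ** (-p)
    approx = (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)
    diff_err.append(abs(approx - math.cos(1)))

# Composite trapezoid rule for the integral of sin over [0, pi] (exact value 2):
# the error keeps shrinking as the grid is refined.
def trap(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

int_err = [abs(trap(math.sin, 0.0, math.pi, n) - 2.0) for n in (10, 100, 1000)]
print(diff_err[0], diff_err[4], diff_err[14])  # error falls, then rises again
print(int_err)                                 # error falls monotonically
```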

The article also mentions Distribution Theory, which is important in the theory of linear PDEs. I suspect it is implicit in the algorithmic theory as well, whether practitioners have spelled this out or not. It makes the differentiation operator itself computable, but at the cost of making the derivatives weaker than ordinary functions. How so? On the one hand, it allows us to obtain things like the Dirac delta as derivatives, even though those aren't functions at all. On the other hand, these objects behave like functions - let's say f(x,y) - except that we can't evaluate them at points; instead, we take their inner product with test functions, which lets us approximate evaluation. This matters because PDE solvers may only be able to provide solutions in the weak, distribution-theoretic sense.
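As a concrete sketch (using a Gaussian as a numerically convenient stand-in for a compactly supported test function): the Heaviside step H has no classical derivative at 0, yet its distributional derivative, defined by the pairing <H', phi> := -<H, phi'>, evaluates to phi(0) -- that is, H' acts as the Dirac delta.

```python
import math

def phi(x):   # test function (Gaussian; decays fast enough for this demo)
    return math.exp(-x * x)

def dphi(x):  # its derivative, phi'(x) = -2x * exp(-x^2)
    return -2.0 * x * math.exp(-x * x)

def H(x):     # Heaviside step: not differentiable at 0 in the classical sense
    return 1.0 if x > 0 else 0.0

# Pairing <H', phi> = -<H, phi'> = -(integral of H(x) * phi'(x) dx),
# approximated by the midpoint rule on [-5, 5].
a, b, n = -5.0, 5.0, 20000
h = (b - a) / n
pairing = -h * sum(H(a + (i + 0.5) * h) * dphi(a + (i + 0.5) * h) for i in range(n))
print(pairing)  # approximately 1.0 = phi(0): H' samples phi at 0, like a delta
```

Note that we never evaluate H' at a point -- only its pairings with test functions are defined, which is exactly the weak sense in which PDE solvers deliver solutions.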

I’m not sure if I am mathematically sophisticated enough to follow along, but I’ll try. This chain of thought reminds me of the present state of cryptography, which is built on unproven assumptions about the computational hardness of certain problems. Meanwhile, Satoshi Nakamoto hacked together some of these cryptographic components into a novel system for decentralized payments, with a few hand-wavy arguments about why it would work, and it grew into a $1+ trillion asset class.

  • The innovation in Bitcoin is not the cryptography but the game theory at work. For example, is it in a miner's interest to destroy the system or to continue mining? There are theoretical attacks at around 20%, not just at 51%. A state actor could also attack the system if it were willing to invest enough resources.

  • Yes, the cool thing about tech is that you don't have to know why it works, or even how, as long as it does.

I took a look at the book a while ago, and I like how it treats abstraction as its guiding theme. For my project Practal (https://practal.com), I've recently pivoted to a new tagline which now includes "Designing with Abstractions". And I think that points to how to resolve the dilemma you raise between pure and applied: soon we will not have to choose between them. Designing (≈ applied math) will be done most efficiently by finding and using the right abstractions (≈ pure math).

The chains of reasoning are only long and intricate if you trace each result back to the axioms. Most meaningful results are built from a handful of higher-level building blocks -- similar to how software is crafted out of modules rather than implementing low-level functionality from scratch each time (yes, similar, but also quite different).

  • Literally the same:

    A type is a theorem and its implementation a proof, if you believe that Curry-Howard stuff.

    We “prove” (implement) advanced “theorems” (types) using already “proven” (implemented) bodies of work rather than return to “axioms” (machine code).

    • No, it is not the same; Curry-Howard is just a particular instance of it, much like "shape" is not the same thing as "triangle".