Comment by btilly
1 month ago
This is a stack overflow question that I turned into a blog post.
It covers both the limits of what can be proven in the Peano Axioms, and how one would begin bootstrapping Lisp in the Peano Axioms. All of the bad jokes are in the second section.
Corrections and follow-up questions are welcome.
After putting on my boots and wading through all of that I think that you have one edit to make.
In the "Why Lisp?" section there is a bit of basic logic defined. The first of those functions appears to have unbalanced parentheses.
> (defun not (x)
>   (if x
>       false
>       true)
I have a compulsion that I can't control when someone starts using parentheses. I have to scan to see who does what with who.
You later say in the same section
> But it is really easy to program a computer to look for balanced parentheses
Okay. This is pretty funny. Thanks for the laugh. I realize that you weren't doing that but it still is funny that you pointed out being able to do it.
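That claim is easy to make concrete. A minimal sketch of such a checker (my own illustration, not code from the post):

```python
def balanced(text):
    """True iff every '(' in text has a matching ')'."""
    depth = 0
    for ch in text:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a ')' appeared before its '('
                return False
    return depth == 0          # leftover depth means unclosed '('

print(balanced("(defun not (x) (if x false true)"))   # False: one ')' short
print(balanced("(defun not (x) (if x false true))"))  # True
```

Run on the quoted defun, it flags exactly the problem reported above: three opens, two closes.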
This later comment in the "Basic Number Theory" section was also funny.
> ; After a while, you stop noticing that stack of closing parens.
I really enjoyed reading this post. Great job. Though it has been a long time since I did anything with Lisp I was able to walk through and get the gist again.
Thank you for the correction!
And I'm glad that someone liked some of my attempts at humor. As my wife sometimes says, "I know that you were trying to make a joke, because it wasn't funny."
It was a great read. I enjoyed how you laid it all out. It reminded me of some of my upper level math coursework. Easy to follow if you take it step by step and stop to consider the implications. Some things become obvious to the most casual observer.
I found all the jokes funny as well. Thanks for the blog post. Extremely nice read. I love the approach.
"I think you liked it just fine." hehe
> I have to scan to see who does what with who.
Are you saying that parentheses introduce the problem of having to scan to see what goes with what?
As in, if we don't have parentheses, but still have recursive/nested structure, we don't have to scan?
For myself the issue goes back to my college mathematics courses, especially differential equations. I worked those homework problems by hand on a large format tablet, roughly 24" x 36", carefully laying them out step by step so that I could walk through them in the future and make sense of the solution process. Counting and matching parentheses was pretty critical since a missed parenthesis may not pop out at you like it would in a compiler error or by walking through code.
I automatically count and match even today, 40 years later.
Python block indentation is an example of nested structure that's at least easier to visually scan. You don't need to count opening/closing parens, just look down a column of text, assuming nobody mixed tabs and spaces. (But I wouldn't go so far as to say you don't need to scan it at all.)
Thanks for this. In another strange internet coincidence, I was asking ChatGPT to break down the fundamentals of the Peano axioms just yesterday and now I see this. Thumbs up!
I suspect that in the near future (if not already) ChatGPT data will be sold to data brokers and bought by Amazon, such that writing a prompt will end up polluting Alexa product recommendations within minutes to hours.
I suspect that in the near future nobody will directly read product recommendations.
Oh for fuck's sake.
Can we not ruin every technology we develop with ads?
Well there was a post on mathmemes a day ago about teaching kids set theory as a foundation for math with some discussion of PA. So maybe related ideas are echoing across the intertubes in this moment?
> teaching kids set theory as a foundation for math
Very reminiscent of the New Math pedagogy of the 1960s. Built up arithmetic entirely from sets. Crashed and burned for various reasons but I always had a soft spot for it. It also served as my introduction to binary arithmetic and topology.
I've noticed this too. I will be researching a topic elsewhere and then it seems to pop up in HN. Am I just looking for patterns where there are none, or is there some trickery happening where HN tracks those activities and mixes in posts more relevant to my interests with the others?
HN has a single front page for everyone, so it's a recency illusion. You pay more attention to details and skim over titles you haven't been thinking about.
This proves that ChatGPT sells your data to HN which then decides which posts to put on the front page.
Incidentally, this also proves that GP is the main character.
This is fascinating! I haven't read much past the intro yet, but I'm struck by the whole premise: within PA you can prove that each specific Goodstein sequence terminates at 0, yet not that all Goodstein sequences terminate (perhaps a trivial distinction, but still interesting).
I also find it super weird that the Peano axioms are enough to encode computation. Again, this might be trivial if you think about it, but that's one self-referential layer more than I'd considered before.
One question for you btilly: oddly enough, I just recently decided to learn more set theory, and worked through an Intro to Set Theory textbook up to Goodstein sequences just last week. I'm a bit past that now.
Do you have a good recommendation for a second, advanced Set Theory textbook? Also, any recommendation for a textbook that digs into Peano arithmetic? (My mini goal since learning the proof for Goodstein sequences is to work up to understanding the proof that Peano isn't strong enough to prove Goodstein's theorem, though I'll happily take other paths instead if they're strongly recommended.)
My apologies for having missed this request.
I don't have a good suggestion for an advanced set theory textbook. Grad school was 30 years ago, and I didn't specialize in logic.
The best set theory book that I read was https://www.amazon.com/Naive-Theory-Undergraduate-Texts-Math.... But that one is aimed at people who want to go into math without specializing in set theory, not at people who want to study set theory in depth.
Thanks! I read through a bunch of that textbook when I first decided to learn more maths a few years ago, but I relatively quickly started dual-reading a different textbook ("Introduction to Set Theory" by Hrbacek and Jech), which is a more in-depth, non-naive introduction. That's the book I'm continuing to work through now.
Boot sector Lisp bootstraps itself.
https://justine.lol/sectorlisp2/
Also, lots of Lisps from https://t3x.org implement numerals (and the rest of the language) from cons cells and apply/eval:

> John McCarthy discovered an elegant self-defining way to compute the above steps, more commonly known as the metacircular evaluator. Alan Kay once described this code as the "Maxwell's equations of software". Here are those equations as implemented by SectorLISP:
>
> ASSOC EVAL EVCON APPLY EVLIS PAIRLIS
Ditto with some Forths.
Back to T3X: the author also has Zenlisp, where the meta-circular evaluation is basically a demonstration of how to define eval/apply and how they call each other recursively.
http://t3x.org/zsp/index.html
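The eval/apply mutual recursion at the heart of all of these can be sketched in a few lines of Python (a toy illustration of the idea only; the names and structure are mine, not SectorLISP's or Zenlisp's actual code):

```python
def evaluate(expr, env):
    """Toy evaluator for a tiny Lisp: symbols, quote, if, lambda, application."""
    if isinstance(expr, str):                 # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):            # number etc.: self-evaluating
        return expr
    op = expr[0]
    if op == 'quote':
        return expr[1]
    if op == 'if':
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if op == 'lambda':                        # close over the defining env
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(op, env)                    # "apply": eval operator, then operands
    return fn(*[evaluate(a, env) for a in expr[1:]])

env = {'cons': lambda a, d: (a, d), 'car': lambda p: p[0], 'cdr': lambda p: p[1]}
# ((lambda (x) (car (cons x x))) 42)
print(evaluate([['lambda', ['x'], ['car', ['cons', 'x', 'x']]], 42], env))  # 42
```

Evaluating the operator and operands, then applying, is the whole eval/apply loop; everything else in a real metacircular evaluator is bookkeeping around that.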
I knew that this sort of stuff was possible, but it is fun to see it.
When it comes to bootstrapping a programming language from nothing, the two best options are Lisp and Forth. Of the two, I find Lisp easier to understand.
> Corrections and follow-up questions are welcome.
There are two places where you accidentally wrote “omega” instead of “\omega”.
Thanks!
I am away from my computer for the day, but I will fux it later.
> I will fux it later
I think the problem is that it's already fuxed.
It is now fixed. :-)
Hey! This is fantastic and actually ties in some very disparate parts of math. Basically, reorient & reformulate all of math/epistemology around discrete sampling of the continuum. Invert our notions of Aleph/Beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments that nucleate discrete samples of continuum of reality (ontology)
Then every modal logic becomes some mapping of 2^(N) to some set of statements. The only thing that matters is how predictive they are with some sort of objective function/metric/measure, but you can always induce an "ultra metric" around notions of cognitive complexity classes, i.e. your brain is finite and can compute finite thoughts/second. Thus for all cognition models that compute some meta-logic around some objective F, we can motivate that less complex models are "better". There comes the ultra measure to tie disparate logic systems together. So I can take your Peano Axioms and induce a ternary logic (True, False, Maybe) or an indefinite-definite logic (True or something else entirely). I can even induce Bayesian logics by doing power sets of T/F. So a 2x2 Bayesian inference logic: (True Positive, True Negative, False Positive, False Negative)
Fun stuff!
Edit: The technical tl;dr that I left out is, imho, the unification of all math: algebraic topology + differential geometry + tropical geometry + algebraic analysis. D-modules and Microlocal Calculus from Kashiwara and the Yoneda lemma encode all of epistemology as relational: either between objects or the interaction between objects, defined as collisionless Planck hypervolumes.
This basically encodes the particle-wave duality as discrete-continuum, and all of epistemology becomes Grothendieck topoi + derived categories + functorial spaces between isometries of those dual spaces, whether algebras/coalgebras (discrete modality) or homologies/cohomologies (continuous actions).
Edit 2: The thing that ties everything together is Noether's symmetry/conserved quantities which (my own wild ass hunch) are best encoded as "modular forms", arithmetic's final mystery. The continuous symmetry I think makes it easy to think about diffeomorphisms from different topoi by extracting homeomorphisms from gauge invariant symmetries (in the discrete case it's a lattice, but in the continuous we'd have to formalize some notion of liquid or fluid bases? I think Kashiwara's crystal bases has some utility there but this is so beyond my understanding )
> Invert our notions of Aleph/Beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments that nucleate discrete samples of continuum of reality (ontology)
There’s probably ten+ years of math education encoded in this single sentence?
My apologies to ikrima for being critical, but I think anyone who thinks "aleph/beth/Betti numbers" is a coherent set of things to put together is just very confused.
Aleph and beth numbers are related things, in the field of set theory. (Two sequences[1] of infinite cardinal numbers. The alephs are all the infinite cardinals, if the axiom of choice holds. The beth numbers are the specific ones you get by repeatedly taking powersets. They're only all the cardinals if the "generalized continuum hypothesis" holds, a much stronger condition.)
[1] It's not clear that this is quite the right word, but no matter.
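Spelled out, the two hierarchies are defined by transfinite recursion (standard definitions, stated here for reference):

```latex
% Aleph numbers: all infinite cardinals in order (under AC)
\aleph_0 = |\mathbb{N}|, \qquad
\aleph_{\alpha+1} = \text{least cardinal} > \aleph_\alpha, \qquad
\aleph_\lambda = \sup_{\alpha<\lambda} \aleph_\alpha \ \text{(limit $\lambda$)}

% Beth numbers: iterated powersets
\beth_0 = \aleph_0, \qquad
\beth_{\alpha+1} = 2^{\beth_\alpha}, \qquad
\beth_\lambda = \sup_{\alpha<\lambda} \beth_\alpha

% CH is the statement \aleph_1 = \beth_1;
% GCH is \aleph_\alpha = \beth_\alpha for every ordinal \alpha.
```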
Betti numbers are something totally different. If you have a topological space, you can compute a sequence[2] of numbers called Betti numbers that describe some of its features. (They are the ranks of its homology groups. The usual handwavy thing to say is that they describe how many d-dimensional "holes" the space has, for each d.)
[2] This time in exactly the usual sense.
It's not quite true that there is no connection between these things, because there are connections between any two things in pure mathematics and that's one of its delights. But so far as I can see the only connections are very indirect. (Aleph and beth numbers have to do with set theory. Betti numbers have to do with topology. There is a thing called topos theory that connects set theory and topology in interesting ways. But so far as I know this relationship doesn't produce any particular connection between infinite cardinals and the homology groups of topological spaces.)
I think ikrima's sentence is mathematically-flavoured word salad. (I think "Betti" comes after "beth" mostly because they sound similar.) You could probably take ten years to get familiar with all the individual ideas it alludes to, but having done so you wouldn't understand that sentence because there isn't anything there to understand.
BUT I am not myself a topos theorist, nor an expert in "our human brain's sensory instruments". Maybe there's more "there" there than it looks like to me and I'm just too stupid to understand. My guess would be not, but you doubtless already worked that out.
[EDITED to add:] On reflection, "word salad" is a bit much. E.g., it's reasonable to suggest that our senses are doing something like discrete sampling of a continuous world. (Or something like bandwidth-limited sampling, which is kinda only a Fourier transform away from being discrete.) But I continue to think the details look more like buzzword-slinging than like actual insight, and that "aleph/beth/Betti" thing really rings alarm bells.
You know what, I nerd-sniped myself. Here's a more fleshed-out sketch of the Discrete Continuum Bridge:
https://github.com/ikrima/topos.noether/blob/master/discrete...
well, you're in luck because I'm about to make a fool of myself in trying to tease Terence Tao over at https://mathstodon.xyz/@mathemagical
Wish me luck!
You know what, since you put in all that work, here's my version using p-adic geometry to generalize the concept of time as a local relativistic "motive" (from category theory) notion of ordering (i.e. analogous to Grothendieck's generalization of functions as being point samples along curves of a basis of distributions to generalize notions of derivatives):
https://github.com/ikrima/topos.noether/blob/aeb55d403213089...