
Comment by nicklecompte

1 year ago

I think HoTT helps remove a lot of the footguns and computational hairiness around using type-theoretic mathematics. But the issues Buzzard is talking about are really more profound than that. This relates to a much deeper (but familiar and nontechnical) philosophical problem: you assign a symbol to a thing, and read two sentences that refer to the same symbol ("up to isomorphism"), but not necessarily the same thing. If the sentences contradict each other, is it because they disagree on the meaning of the symbol, or do they disagree on the essence of the thing? There is a strong argument that most debates about free will and consciousness are really about what poorly-understood thing(s) we are assigning to the symbol. Worse, this makes such debates ripe for bad faith, where flaws in an argument are defended by implicitly changing the meaning of the symbol.
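(To make the "same symbol, different thing" gap concrete, here is a minimal sketch in plain Lean 4, no Mathlib; `Switch`, `toBool`, and `ofBool` are made-up names for illustration, not anything from Buzzard or the HoTT literature. Two types can be isomorphic and still be different types, so a fact proved about one has to be restated and re-proved, or transported by hand, for the other; univalence in HoTT is precisely the move that lets the isomorphism itself count as an equality of types.)

    -- A hypothetical two-element type, isomorphic to `Bool` but not the same type.
    inductive Switch where
      | up   : Switch
      | down : Switch

    def toBool : Switch → Bool
      | .up   => true
      | .down => false

    def ofBool : Bool → Switch
      | true  => .up
      | false => .down

    -- The two round trips witness the isomorphism.
    theorem to_of (b : Bool) : toBool (ofBool b) = b := by
      cases b <;> rfl

    theorem of_to (s : Switch) : ofBool (toBool s) = s := by
      cases s <;> rfl

    -- Yet facts about `Bool` don't automatically apply to `Switch`;
    -- each one must be re-proved (or transported along the isomorphism by hand).
    theorem switch_cases (s : Switch) : s = Switch.up ∨ s = Switch.down := by
      cases s
      · exact Or.inl rfl
      · exact Or.inr rfl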

In the context of """simple""" mathematics, preverbal toddlers and chimpanzees clearly have an innate understanding of quantity and order. It's only after children fully develop this innate understanding that there's any point in teaching them "one," "two," "three," and thereby giving them the tools for handling larger numbers. I don't think it makes sense to say that toddlers understand the Peano axioms. Rather, Peano formulated the axioms based on his own (highly sophisticated) innate understanding of number. But given that he spent decades pondering the topic, it seems likely that Peano's abstract conception of "number" became different from (say) Kronecker's, or that of other constructivists. Simply slapping the word "integer" on two different concepts and pointing out that they coincide for quantities we can comprehend doesn't by itself address the discrepancy in concept revealed by Peano's acceptance of unbounded integers and Kronecker's skepticism of them. (The best argument against constructivism is essentially sociological and pragmatic, not "mathematically rational.")

Zooming out a bit, I suspect we (scientifically informed laypeople + many scientists) badly misunderstand the link between language and human cognition. It seems more likely to me that we have extremely advanced chimpanzee brains that make all sorts of sophisticated chimpanzee deductions, including about the extremely difficult question of "what is a number?", but that to be shared (and critically investigated) these deductions have to be squished into language, a woefully insufficient compromise. And I think a lot of philosophical - and metamathematical - confusion can be understood as our chimpanzee brains having a largely rigorous grasp of something but running into the limits of our Broca's and Wernicke's areas, limits which may or may not be fixed by "technological development" in human language. (Don't even get me started on GPT...)

The level of insight and clarity in your message, and in most comments in this thread actually, is surprisingly high compared to what I often read on HN.

Thank you very much for this