Comment by sureglymop
9 hours ago
I think for most people the issue is that they never even get to the fun stuff. I remember not really liking math right up until university, where we had set theory in the first semester, defined the number sets from scratch, and went on to monoids, groups, rings, etc. That "starting from scratch" and defining everything was extremely satisfying!
Interesting, I had somewhat of an opposite reaction, although I am certainly not a mathematician. Once everything became definitions, my eyes glazed over - in most cases the rationale for the definitions was not clear and the definitions appeared over-complicated.
It took me some time, but now it's a lot better -- like a little game I somewhat know the rules of. I now accept that mathematicians are often worrying about maximal abstraction or addressing odd pathological corner cases. This allows me to wade through the complexity without getting overwhelmed like I used to.
My dad always told me growing up that math was like a game and a puzzle, and I hated that. I also hated math at the time. It felt more like torture than a game.
I didn't fall in love with math until Statistics, Discrete Math, Set Theory and Logic.
It was the realization that math is a language that can be used to describe all the patterns of the real world, and to help cut through bullshit and reckon real truths about it.
If you’re interested in computer science, have you ever looked at the Software Foundations course by UPenn? It follows a similar approach of having you build all sorts of fascinating math principles and constructions from the ground up. But then it keeps going, all the way up to formal methods of software analysis and verification.
https://softwarefoundations.cis.upenn.edu/
Yes, I agree! And also that a lot of the fun stuff is hidden behind historically opaque terminology. Although I'm also sympathetic to the fact that writing accessible explanations is a separate and hard-to-master skill. Once you understand something, it can be really hard to step back into the mindset of not understanding it and figure out an explanation that would make the idea "click".
I think a lot of maths is secretly much easier than it appears, just missing the explanation that makes the core idea easy to grasp and build upon.
For example, I've been meaning to write an explorable[0] for explaining positional notation in any integer base (so binary, hexadecimal, etc) in a way that any child who can read clocks should be able to follow. Possibly teaching multiplication along the way.
Conceptually it's quite simple: imagine a counter that looks like an analog clock, but with the digits 0 to 9 and a +1 and -1 button. We can use it to count between zero and nine, but if we add one to nine, we step back to zero. Oh no! Ok, but we can solve this by adding a second counter. Whenever the first counter does a full circle, we increase the second by one. A full circle on the first counter is ten steps, so each step on the second counter represents ten steps. But what if the second counter itself does a full circle? No problem, just add a third! And so on.
So then the natural question is... what if we have fewer digits than 0 to 9? Like 0 to 7? Oh, we get octal numbers. Just 0 and 1 gives us binary. And adding more digits using letters from the alphabet gives us hexadecimal.
The core approach is just a very physical representation of base-10 positional notation, which hopefully makes it easy to do the counting and follow what is happening. No "advanced" concepts like "base" or "exponentiation" needed, but those are abstractions that are easy to layer on top when they get older.
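To make the mechanism concrete, here's a rough sketch of the counter idea in Python (my own illustration, not part of the planned explorable; the function name and the list-of-dials representation are made up for this sketch):

    def increment(dials, base=10):
        """Add one to a number stored as a list of dials, least significant dial first."""
        i = 0
        while True:
            if i == len(dials):   # every dial rolled over: add a new dial
                dials.append(0)
            dials[i] += 1
            if dials[i] < base:   # no full circle, so we're done
                return dials
            dials[i] = 0          # full circle: step back to zero...
            i += 1                # ...and bump the next dial instead

    # Count to thirteen with two-digit dials (base 2, i.e. binary):
    dials = [0]
    for _ in range(13):
        increment(dials, base=2)
    print(dials[::-1])  # [1, 1, 0, 1] -> 1101 in binary, which is 13

The same function with base=8 or base=16 gives octal or hexadecimal counting, which is exactly the "fewer or more digits" question above.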
I've asked around with friends who have kids - most of them learn to read clocks somewhere between four and six, and by the time they're eight they can all count to 100. So I would expect that in theory this approach would make the idea of binary and hexadecimal numbers understandable at that age already.
EDIT: funnily enough, the article also mentions that, precisely thanks to positional notation, almost every adult can immediately answer the question "what is one billion minus one" (999,999,999).
[0] https://explorabl.es/
In college I took Formal Logic II as it fulfilled requirements in both my Comp Sci and Phil majors. It turned out that PHIL 104 was cross-listed as MATH 562, because the professor who taught Logic I was allowed to teach whatever he wanted for the follow-up class. I had technically taken the prereq, which was a basic CS logic course, but I was in way over my head. It was one of the most fun courses I took in college.
We were given the exact text of the final exam weeks in advance, and were allowed to do anything at all to prepare, including collaborating with the other students or asking other professors (who couldn't make heads or tails of it). The goal was to be able to answer 1 or 2 out of the 10 questions on the exam, and even if you couldn't, you got a B+ at minimum.
I wish I had a better memory, but I believe one of the questions I successfully answered was to prove Post's Theorem using Turing machines? The problem is, I never used the knowledge from that class again, but to this day I still think about it. It would be amazing to go back and learn more about that fascinating intersection of philosophy and computer science.
What I loved the most was that it combined hard math with the kind of esoteric metaphysical questions about mathematics that many practitioners despise because they feel such questions undermine their work. It turns out, when you go that deep it's impossible not to touch on the headier stuff.
Totally agree! In high school, lots of things were vaguely defined. I remember I didn't fully understand what "f o g" was until I was given the definition of a monoid. The same goes for limits and derivatives: once you get the proper definitions, you can pretty easily derive all the formulas and theorems you use in high school. Also, in high school we mostly did calculations and simple deductions, but at university we were proving everything. Nice change of perspective.
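For instance (my worked example, not from the original comment), starting from the limit definition of the derivative you can recover the high-school rule (x^2)' = 2x in a few lines:

    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

    (x^2)' = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
           = \lim_{h \to 0} \frac{2xh + h^2}{h}
           = \lim_{h \to 0} (2x + h)
           = 2x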