Comment by zkmon
3 days ago
I just gave the reason - the notion of comparison and 1-to-1 mapping has an underlying assumption that the subjects are quantifiable and identifiable. This assumption doesn't apply to something that is inherently neither quantifiable nor a cut in the continuum, the way a number is. What argument are you offering against this?
I'm not the person you replied to, and I doubt I'll convince you out of your very obviously strong opinions, but, to be clear: you can't even define a continuum without a simpler number system to, as you non-standardly put it, cut it. It turns out that when you define any system that behaves like the natural numbers, objects like the rationals and the continuum pop out, precisely because of situations like the one Cantor describes (thank you, Yoneda). The point of transfinite cardinalities is not that they necessarily physically exist as objects in their own right; rather, they are a convenient shorthand for a pattern that emerges when you can formally say "and so on" (infinite limits). When you do, it turns out there's a consistent way to compare some of these "and so ons", and that's the transfinite cardinalities such as aleph_0 and whatnot.
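To make the "rationals pop out as countable" point concrete, here's a small sketch (my own function names, not anything from the thread) of the Cantor pairing function: an explicit bijection between N x N and N, which is the key step in showing the rationals have the same cardinality as the naturals.

```python
import math

def pair(x: int, y: int) -> int:
    """Cantor pairing: walks the grid N x N along anti-diagonals."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of pair(): recover the anti-diagonal index, then the offset."""
    w = (math.isqrt(8 * z + 1) - 1) // 2  # which anti-diagonal z lies on
    y = z - w * (w + 1) // 2
    return (w - y, y)

# Round-tripping shows the map is injective; the anti-diagonal walk shows it
# hits every natural number exactly once (each full diagonal fills a
# contiguous block 0, 1, 2, ...).
assert all(unpair(pair(x, y)) == (x, y) for x in range(25) for y in range(25))
```

Running rationals through this (as numerator/denominator pairs, skipping duplicates) is exactly the zig-zag enumeration usually attributed to Cantor.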
Further, all math is idealist bullshit; but it's useful idealist bullshit because, when you can map representations of physical systems into it such that the physical objects behave like the mathematical objects that represent them, you can achieve useful predictive results in the real world. This holds even for results that require some concept of infinity to fully operationalize: they still make useful predictions when the axiomatic conditions are met.
For the record, I'm not fully against what you're saying. I personally hate that the axiom of choice is so commonly accepted; I think it's a poorly founded axiom that creates more paradoxes than it resolves. I also wish the law of the excluded middle were tossed out more often, for similar reasons. However, when the systems you're analyzing do behave well under either axiom, the math works out to be so much easier with both of them, so in they stay. (Until you hit things like Banach–Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering." But you learn to treat results like that the way you treat renormalizing poles in analytic functions: carefully, and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met.")
About the 1-to-1 mapping of elements across infinite sets: what guarantees that this mapping operation can be extended to infinite sets?
I'd say it cannot be extended or applied, because the operation cannot be "completed". This is not because it would take infinite time; it is because we can't define the completion of the operation, even as an imagined snapshot.
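One way the reply below could be illustrated (my own example, not from the thread): a bijection between infinite sets is a *rule* whose injectivity and surjectivity are properties of the rule itself, so no "completed" enumeration is ever needed. Here the rule n -> 2n pairs the naturals with the even naturals, and both defining properties are checkable on any finite prefix.

```python
# The bijection n -> 2n between the naturals and the even naturals is
# specified by a rule, not by an enumeration that must "finish". What gets
# proven (spot-checked here on a finite prefix) is that the rule never
# collides and that every even number is hit.

def f(n: int) -> int:
    return 2 * n

def f_inverse(m: int) -> int:
    assert m % 2 == 0, "only even numbers are in the image"
    return m // 2

N = 1000
assert len({f(n) for n in range(N)}) == N                      # injective: no collisions
assert all(f(f_inverse(m)) == m for m in range(0, 2 * N, 2))   # surjective onto the evens
```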
It's an axiom (for arbitrary collections of sets, the axiom of choice). A valid way of viewing an axiom is as something not dissimilar to a "modeling requirement" or an "if statement". By that I mean, with the axiom of choice for example: it is just a formal version of "assume you can take one element from each set in a (possibly infinite) collection of nonempty sets and thereby form a new set (the new set does not have to be unique)." That makes intuitive sense for most finite sets we deal with physically, and for infinite sets it can be adopted in a way that successfully predicts results that do hold in the real world, while also providing a really convenient way to define a lot of consistent properties of the continuum itself.
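A toy illustration of that "if statement" reading (my own naming, nothing standard): for a finite family of nonempty sets, a choice function can simply be written down; the axiom asserts that such a function *exists* even for infinite families where no explicit picking rule is available.

```python
# For a finite family of nonempty sets, a choice function is trivially
# constructible -- just pick one element from each set. The axiom of choice
# asserts such a function exists even for infinite families where no
# uniform rule like min() can be written down.

def choice_function(family: list[frozenset]) -> dict[frozenset, int]:
    """Map each nonempty set in the family to one of its elements."""
    return {S: min(S) for S in family}  # min() is our explicit "rule"

family = [frozenset({3, 1, 4}), frozenset({1, 5}), frozenset({9, 2, 6})]
chosen = choice_function(family)
assert all(chosen[S] in S for S in family)  # the defining property of a choice function
```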
However, if you're dealing with a problem where you can't always usefully distinguish between elements across arbitrary set-like objects, then it's not a useful axiom and ZFC is not the formalism you want to use. For most problems we analyze in the real world, that's something we can usefully assume, hence why it's such a successful and common theory, even if it leads to physical paradoxes like Banach–Tarski, as mentioned.
Mathematicians, in practice, fully understand what you mean with your complaint about "completion", but the beauty of these formal infinities is the guarantee that the theory will never break down as a predictive tool, no matter the length of time, the number of elements, or the level of precision you consider; the fact that it can't truly complete is precisely the point. Also, within the formal system used, we absolutely can consistently define what the completion would be at "infinity", as long as you treat it correctly and don't break the rules. Again, this is useful because it lets you bridge multiple real problems that seemed unrelated, and it pushes the "representation errors" into the paradoxes and undefined statements of the theory (thanks, Gödel).
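That "define the completion at infinity" move can be made concrete with the standard epsilon definition of a limit (my own sketch, not from the thread): the partial sums of 1 + 1/2 + 1/4 + ... never literally finish, yet their limit 2 is rigorously defined by a finitely checkable guarantee.

```python
# "Completion at infinity" without completing anything: the limit of the
# partial sums 1 + 1/2 + 1/4 + ... is *defined* by the guarantee that, for
# any tolerance eps, some finite partial sum (and every later one) lands
# within eps of 2. Each instance of the guarantee has a finite witness.

def partial_sum(n: int) -> float:
    """Sum of the first n terms of the geometric series with ratio 1/2."""
    return sum(0.5 ** k for k in range(n))

def within_tolerance(limit: float, eps: float, n: int) -> bool:
    return abs(partial_sum(n) - limit) < eps

# For each tolerance there is an explicit finite witness n:
for eps in (1e-1, 1e-3, 1e-6):
    n = 1
    while not within_tolerance(2.0, eps, n):
        n += 1
    assert within_tolerance(2.0, eps, n)
```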
If it helps, the transfinite cardinalities (what you call infinity) that you're worried about are more like rates than counts, even if they have some orderable or count-like properties. In the strictest sense, you can actually drop into Archimedean math, which you might find very enjoyable to read about or use; in a very loose sense, it pushes the idea of infinity from rates of counting toward rates of reaching arbitrary levels of precision.