Comment by LegionMammal978
1 year ago
If there are many different ways to represent what something 'literally is', then how do we know for sure that ASCII '1' isn't a true representation of the literal number 1, just considered under different operations? We can say that 1 + 1 + 1 ≠ 1 (in Z), and we can also say that 1 + 1 + 1 = 1 (in Z/2Z): the discrepancy comes from two different "+" operations.
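A minimal sketch of that point, in Python (the ZMod2 wrapper is hypothetical, just for illustration): the same expression "1 + 1 + 1" evaluates differently depending on which "+" the symbols are read under.

```python
class ZMod2(int):
    """Integers modulo 2: same digits, but "+" reinterpreted mod 2."""
    def __new__(cls, value):
        return super().__new__(cls, value % 2)

    def __add__(self, other):
        return ZMod2(int(self) + int(other))


one_in_Z = 1          # ordinary integer "1"
one_in_Z2 = ZMod2(1)  # "1" as an element of Z/2Z

print(one_in_Z + one_in_Z + one_in_Z)     # 3, so 1 + 1 + 1 != 1 in Z
print(one_in_Z2 + one_in_Z2 + one_in_Z2)  # 1, so 1 + 1 + 1 == 1 in Z/2Z
```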
For that matter, how do we know what infinite sets like Z and Q 'literally are', without appealing to a system of axioms? The naive conception of sets runs headlong into Russell's paradox.