Comment by dooglius

11 hours ago

I'm going to agree with the downvoted people and say that this sort of approach is largely meaningless if you allow arbitrary mappings. IMO the most reasonable mathematical formulation, given the structure of the integers (in the sense of e.g. Peano), is that to truly represent an integer you have to represent zero, and every other representable number must have a representable predecessor; i.e., to say you can represent 5, you need 0, 1, 2, 3, 4, and 5 all to be representable. By a straightforward counting argument, 2^64-1 is then the largest representable number; in other words, the obvious answer is right.
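
The counting argument, spelled out as a minimal Python sketch:

    # If representing k requires 0, 1, ..., k-1 to be representable too,
    # the representable numbers form an initial segment {0, ..., m}.
    # That segment has m + 1 elements, and 64 bits give only 2**64
    # distinct patterns, so m + 1 <= 2**64.
    n = 64
    patterns = 2**n          # distinct 64-bit strings
    largest = patterns - 1   # hence m <= 2**64 - 1
    print(largest)           # 18446744073709551615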

As I've replied several times already, we don't allow arbitrary mappings. We allow computable mappings, but consider only obviously non-cheating languages (Turing machines, lambda calculus, Linux's bc, or any existing programming language) that are not geared toward outputting insanely large numbers.
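
For a sense of scale, here's a rough Python sketch (a language nobody designed for this game): even a 7-byte expression already dwarfs 2^64-1.

    import math

    # 9**9**9 is 7 bytes of plain Python, yet it denotes 9**(9**9) =
    # 9**387420489, a number with about 370 million digits. Python can
    # evaluate it given patience and a few hundred MB of RAM; here we
    # just count its digits instead of materializing it.
    digits = int((9**9) * math.log10(9)) + 1
    print(digits)   # 369693100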

  • It's not "the largest representable number" because you're not representing numbers in any rigorous sense. If I give you 64 bits, you can't tell me what number those bits represent (first, because the rules of the game are ambiguous - what if I give you 8 bytes that are a valid program in two different languages; and second, because even if you made the rules precise, you don't know which bitstrings correspond to programs that halt). And if I give you a number, you can't tell me which 64 bits represent that number or even if the number is representable, and that's true even for small numbers and even if I give you unbounded time.

    It seems far more natural to say that you're representing programs rather than numbers. And you're asking: what is the largest finite output you can get from a program of 8 bytes or less in today's programming languages? Which is also fun and interesting!

    • > If I give you 64 bits, you can't tell me what number those bits represent

      You have to tell me the (non-cheating) programming language that the 64-bit program is written in as well.

      > And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less.

      That's what the post ends up saying, after first discussing conventional representations, and then explicitly widening the representations to programs in (non-cheating) languages.

  • I would say that all of those seem both arbitrary and geared toward outputting insanely large numbers (in the sense that the output of any Turing-complete language is). Now if you can make these claims in a mathematically rigorous way (i.e. without relying on a particular mapping like Turing Machines / Lambda Calculus, and without silly "up to a constant factor" cheats), then that would be more interesting.

    • Turing Machines and Lambda Calculus can only output insanely large numbers by building those numbers from scratch using their Turing completeness (the sketch after this reply shows the idea with Church numerals). So while lambda calculus can output something exceeding Loader's Number, it needs well over a thousand bits to do so. What I mean by "geared toward outputting insanely large numbers" is something like: I define a language in which the 1-bit program "0" outputs Loader's Number. That is obviously cheating.

      There is unfortunately no mathematically rigorous way to define what is cheating, so it seems unreasonable to ask me for that.
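
      A quick sketch of that "building from scratch", using Python lambdas as stand-ins for actual lambda-calculus terms:

          # A Church numeral n means "apply f n times".
          zero = lambda f: lambda x: x
          succ = lambda n: lambda f: lambda x: f(n(f)(x))

          def to_int(n):                   # read a numeral back as an int
              return n(lambda k: k + 1)(0)

          three = succ(succ(succ(zero)))

          # Applying one numeral to another is exponentiation: n(m) is m**n.
          print(to_int(three(three)))      # 27, i.e. 3**3
          # three(three)(three) already denotes 3**27 (about 7.6 trillion),
          # and each further application adds another story to the tower.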

In the spirit of floating point, I'd say posits offer an excellent insight into the trade-off between precision and dynamic range, while being a meaningful number system rather than some arbitrary mapping.
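
To make that concrete, here's a minimal sketch of a decoder for 8-bit posits, assuming es=0 (the 2022 posit standard fixes es=2; es=0 just keeps the example short). The variable-length regime field is what gives tapered precision: more fraction bits near 1, more dynamic range at the extremes.

    def decode_posit8_es0(bits: int) -> float:
        """Decode an 8-bit, es=0 posit into a float (illustrative sketch)."""
        if bits == 0x00:
            return 0.0
        if bits == 0x80:
            return float("nan")        # NaR, "not a real"
        sign = bits >> 7
        if sign:
            bits = (-bits) & 0xFF      # negative posits: two's complement first
        rest = bits & 0x7F
        # Regime: a run of identical bits; its length sets the power of two.
        lead = (rest >> 6) & 1
        run, i = 0, 6
        while i >= 0 and ((rest >> i) & 1) == lead:
            run, i = run + 1, i - 1
        k = run - 1 if lead else -run
        i -= 1                         # skip the terminating opposite bit
        nf = max(i + 1, 0)             # whatever remains is the fraction
        frac = (rest & ((1 << nf) - 1)) / (1 << nf) if nf else 0.0
        value = (1 + frac) * 2.0**k
        return -value if sign else value

    # Long regimes trade fraction bits for range: 0x7F decodes to 64.0 with
    # no fraction bits left, while 0x40 is exactly 1.0 with five of them.
    print(decode_posit8_es0(0x40), decode_posit8_es0(0x7F))   # 1.0 64.0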

Your idea can't even represent 1/2. What good is that?

You're imposing an arbitrary set of preferred numbers, which is boring and useless for measuring large things.