Comment by tromp
10 hours ago
As I've replied several times before, we don't allow arbitrary mappings. We allow computable mappings, but consider only obviously non-cheating languages (Turing machines, lambda calculus, Linux's bc, or any existing programming language) that are not geared toward outputting insanely large numbers.
It's not "the largest representable number" because you're not representing numbers in any rigorous sense. If I give you 64 bits, you can't tell me what number those bits represent (first, because the rules of the game are ambiguous - what if I give you 8 bytes that are a valid program in two different languages; and second, because even if you made the rules precise, you don't know which bitstrings correspond to programs that halt). And if I give you a number, you can't tell me which 64 bits represent that number or even if the number is representable, and that's true even for small numbers and even if I give you unbounded time.
It seems far more natural to say that you're representing programs rather than numbers. And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less. Which is also fun and interesting!
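To make the game concrete (my own illustration, not from the thread): in a language like Linux's bc, where a bare expression's value is the output, even a 5-byte program reaches an astronomically large number. The sketch below counts the digits of that output with a logarithm rather than materializing the number.

```python
import math

# In bc, "^" is right-associative exponentiation, so the 5-byte
# program "9^9^9" denotes 9^(9^9), not (9^9)^9.
# The 3-byte program "9^9" outputs:
small = 9**9
print(small)  # 387420489

# 9^(9^9) is far too large to print, but its decimal digit count is
# floor((9^9) * log10(9)) + 1 -- hundreds of millions of digits.
digits = math.floor(small * math.log10(9)) + 1
print(digits)
```

The point of the byte-count framing is visible already here: two extra bytes of source turn a nine-digit output into one with hundreds of millions of digits.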
> If I give you 64 bits, you can't tell me what number those bits represent
You have to tell me the (non-cheating) programming language that the 64 bit program is written in as well.
> And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less.
That's what the post ends up saying, after first discussing conventional representations, and then explicitly widening the representations to programs in (non-cheating) languages.
I would say that all of those seem both arbitrary and geared toward outputting insanely large numbers (in the sense that the output of any Turing-complete language is). Now if you can make these claims in a mathematically rigorous way (i.e. without relying on a particular mapping like Turing machines / lambda calculus, and without silly "up to a constant factor" cheats), then that would be more interesting.
Turing machines and lambda calculus can only output insanely large numbers by building those numbers from scratch using their Turing completeness. So while lambda calculus can output something exceeding Loader's Number, it needs well over a thousand bits to do so. What I mean by "geared toward outputting insanely large numbers" is something like: I define a language in which the 1-bit program "0" outputs Loader's Number. That is obviously cheating.
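To illustrate "building those numbers from scratch" (my sketch, not part of the thread): in a general-purpose language, fast-growing functions must be programmed explicitly. A few lines of Python suffice to define Knuth's up-arrow hyperoperation, whose outputs dwarf plain exponentiation, but the program itself costs those bytes of source.

```python
def hyper(a, n, b):
    """Knuth up-arrow: hyper(a, n, b) computes a up-arrow^n b.
    n=1 is exponentiation, n=2 is tetration (a power tower), etc."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # Each level of arrows is defined by iterating the level below.
    return hyper(a, n - 1, hyper(a, n, b - 1))

print(hyper(3, 2, 3))  # 3 up-up 3 = 3^(3^3) = 3^27 = 7625597484987
```

Already hyper(3, 2, 4) = 3^7625597484987 is too large to evaluate, and pushing n higher grows far faster still; the point is that none of this power comes for free in the encoding, it is all paid for in program bits.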
There is unfortunately no mathematically rigorous way to define what is cheating, so it seems unreasonable to ask me for that.