Comment by JoshTriplett
11 years ago
It seems notable that so many different variations of "naming large numbers" all end up mapping iterated applications of a function into a new function that takes a number of iterations as a parameter. Knuth's up-arrow notation defines iterated exponentiation, iterated iterated exponentiation, and so on. Ackermann's function makes the number of up-arrows just a numeric parameter to the function.
But you could go further than that: A(A(10)) is much bigger than A(1000), so you can get larger numbers faster by iterated applications of the Ackermann function. Turn the iteration into a function, and let B(n) be n applications of A to n. Iterated application of B would be even faster, so turn that iteration into a function: let C(n) be n applications of B to n.
But this process itself is iterative. So, define a new function: let iterA(n) be the nth level of iterated Ackermann functions applied to n, where iterA(1) is A(1), iterA(2) is B(2), iterA(3) is C(3), and so on.
Now, iterA can be applied iteratively. So, repeat again. The resulting function can be applied iteratively, so repeat again.
And this whole process of turning iteration into a function and then iterating that function is itself an iterative process, so it can be turned into a function...
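The construction above can be sketched in Python. Since the real Ackermann function is far too large to evaluate, this sketch substitutes the successor function n + 1 as the base A (an assumption purely for runnability); the tower of definitions (`iterate`, `B`, `C`, `level`, `iterA`) mirrors the text.

```python
def iterate(f, k):
    """Return the k-fold composition of f: iterate(f, k)(n) = f(f(...f(n)...))."""
    def g(n):
        for _ in range(k):
            n = f(n)
        return n
    return g

# Stand-in for the (diagonal) Ackermann function: the real A(n) is far
# too large to compute, so the successor function is used instead.
def A(n):
    return n + 1

def B(n):
    # n applications of A to n
    return iterate(A, n)(n)

def C(n):
    # n applications of B to n
    return iterate(B, n)(n)

def level(k):
    """level(1) = A, level(2) = B, level(3) = C, and so on."""
    f = A
    for _ in range(k - 1):
        # wrap f: the next level applies the previous one n times to n
        f = (lambda g: (lambda n: iterate(g, n)(n)))(f)
    return f

def iterA(n):
    """The diagonal across the levels: iterA(n) = level(n)(n)."""
    return level(n)(n)
```

Even with this tame base, B(n) = 2n and C(n) = n·2^n, so iterA(4) is already infeasible to evaluate; with the real Ackermann function every level is unimaginably larger.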
In your notation B(n)=A[n](n), where [] denotes the number of iterations. So C(n)=B[n](n)=(A[n])[n](n)=A[n^2](n), and so really iterA(n)=A[n^n](n). If you repeat this procedure with iterA, then you get iterA2(n)=A[n^(n^n)](n). While n^n, n^n^n and so on are definitely fast-growing, you are better off putting A itself into the []. So you can write an extremely fast-growing function as f(n)=A[A(n)](n). One can therefore improve your strategy by using the [] notation, resolving the recursion, and putting large functions inside the brackets.
And of course my method can itself be improved with further iterations, which can in turn be resolved by creating a new notation. It really never ends.
A[A[A(n)](n)](n)
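A minimal sketch of this bracket notation and its nesting, again with the successor function standing in for A (the real values are uncomputably large):

```python
def bracket(f, k, n):
    """f[k](n): apply f to n, k times."""
    for _ in range(k):
        n = f(n)
    return n

def nested(f, depth, n):
    """depth 1 gives f[f(n)](n); depth 2 gives f[f[f(n)](n)](n); and so on."""
    k = f(n)                      # innermost iteration count
    for _ in range(depth - 1):
        k = bracket(f, k, n)      # wrap one more layer of f[...](n)
    return bracket(f, k, n)

# toy stand-in for A, purely so the sketch is runnable
succ = lambda n: n + 1
```

With the successor stand-in, each extra layer of nesting just adds another n to the result; with a genuinely fast-growing f, each layer dwarfs the one below it.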
And it's all computable, so the Busy Beaver function grows faster than any of these (indeed, faster than any computable function).
So the competition is not really to name the biggest number but rather to write the deepest iterator in 1000 characters and apply it to the highest paradigm function. More like a programmer's challenge.
Let A = {fastest published sequence}. # This is a macro for BB, Ackermann's, etc.
Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^10(A, 10)
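The Iterate-to-a-power idea, applying the iteration operator itself k times, can be sketched as a higher-order function. Here `lift` (a hypothetical name for the Iterate operator) turns f into the function that applies f to n, n times; stacking the lift is what makes the growth explode. The successor function again stands in for A, since the real values cannot be computed.

```python
def lift(f):
    """The Iterate operator: lift(f)(n) applies f to n, n times."""
    def g(n):
        x = n
        for _ in range(n):
            x = f(x)
        return x
    return g

def power_lift(f, k):
    """Apply the lift operator k times: Iterate^k applied to f."""
    for _ in range(k):
        f = lift(f)
    return f

succ = lambda n: n + 1
# lift(succ)(n) == 2*n, and lift(lift(succ))(n) == n * 2**n:
# one extra application of the operator jumps a whole growth class.
```

Already the third lift of the successor function overflows practical computation for inputs above 2, which is the point: iterating the operator, not the function, is where the real growth comes from.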
IMO you can't win big number competitions by such a low-level approach. At least notice that the concept of "deepest iterator in 1000 characters" is itself mathematically definable, and define it as D(1000) or something. Then try to find connections between that and BB numbers... Coming up with the "highest paradigm function" is the important part of the challenge, iteration is unnoticeable by comparison.
Yes! I've had this exact thought before. Recursing, and then wrapping the whole recursion up, as fast as you can is the fastest way.
Actually, the speed would be limited by your linguistic or programming ability to concisely state the level of recursion you want (ideally the logical maximum), given all the previous inputs up to the recursion statement. For example, "now recurse steps a, b, and c and call that d" is not as strong as "imagine how the slope of the exponential curve increased between a and b, and the difference between that and the increase between b and c, and raise c to the power of both of those increases combined, raised to the power of a, b, and c respectively"; but obviously the second sentence is longer and more complex as well as "bigger".