Comment by Jerrrry
3 years ago
At a certain draw distance (probably/hopefully depending on framerate and other specs), the platform replaces the individual cells with the OTCA metapixel, effectively giving the illusion that it is GoL all the way down.
Neat, even when you know the trick.
The other trick is that the simulation speed scales with draw distance: by the time you've zoomed out through one full metapixel level, the simulation has sped up by exactly 35328x, which is the period of the OTCA metapixel.
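To make the arithmetic concrete (my own sketch of the scaling, not the site's code): each meta-level has to run 35328x faster than the one above it for the meta-cells to appear to tick at the same visual rate, so the speedup compounds per level.

```python
# One OTCA metapixel tick takes 35328 Game of Life generations,
# so each zoom-out level must run the underlying simulation
# 35328x faster for the meta-level to tick at the same visual rate.
PERIOD = 35328

def speedup(levels_out: int) -> int:
    """Total speed multiplier after zooming out `levels_out` metapixel levels."""
    return PERIOD ** levels_out
```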
There's some lovely "engineering diagrams" of the OTCA metapixel on this (sparse) blog: http://otcametapixel.blogspot.com/2006/05/how-does-it-work.h.... It's fun to zoom in on the posted site and identify all the features, like the rule table near the bottom left.
For context, the OTCA metapixel is a large pattern (2048x2048) which is capable of simulating Conway's Game of Life rules (or, indeed, any rule set consisting of neighbour-counting birth/death conditions); it does this by having adjacent pixels coordinate sharing of state (whether they're on or off) and then looking up what to do via a (programmable) lookup table. Based on the current state (on or off), a series of "glider guns" will be conditionally activated, which creates the appearance of a filled center (filled with moving gliders).
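The lookup-table idea is easy to sketch. This is a toy illustration of an outer-totalistic rule table, assuming the standard B3/S23 encoding of Life; it shows the concept the metapixel's programmable table implements, not its actual glider-gun circuitry.

```python
# Toy outer-totalistic rule: the next state depends only on the cell's
# current state and the count of live neighbours (0..8).
# Conway's Life is B3/S23: born with 3 neighbours, survives with 2 or 3.
BIRTH = {3}
SURVIVE = {2, 3}

def next_state(alive: bool, live_neighbours: int) -> bool:
    if alive:
        return live_neighbours in SURVIVE
    return live_neighbours in BIRTH

# The metapixel's programmable table is the same idea made explicit:
# 2 x 9 entries, one per (state, neighbour-count) pair.
TABLE = {(s, n): next_state(bool(s), n) for s in (0, 1) for n in range(9)}
```

Reprogramming the metapixel to a different B/S rule is just changing which table entries are set, which is why the same 2048x2048 pattern can simulate any rule of this form.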
It also appears to retain the current state of arbitrarily higher recursive instances, not that they move much when you're zoomed in. Makes me wonder, how do you code for the current 'position' in the whole stack in a way that remains consistent? Conceptually the space is very big. If you zoom/recurse in to a random metapixel 13 times in a row, you're almost certainly looking at a pixel that no other human has or ever will see, and you have scaled from the width of the observable universe to the width of a proton.
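A quick back-of-envelope check of that claim (my numbers, order-of-magnitude physical values assumed): each recursion multiplies the linear scale by 2048, and thirteen levels does land near the universe-to-proton ratio.

```python
# Each metapixel is 2048x2048 cells, so each zoom level scales
# linear size by a factor of 2048.
levels = 13
scale = 2048 ** levels          # ~1.1e43

# Rough physical comparison (order-of-magnitude values):
observable_universe_m = 8.8e26  # diameter of the observable universe, metres
proton_m = 1.7e-15              # proton charge radius, metres
ratio = observable_universe_m / proton_m   # ~5e41
```

So 13 levels overshoots the universe-to-proton ratio slightly, which matches the "almost certainly no other human" intuition.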
Yeah, this part is very clever and quite well-done IMHO. I suspect the trick is to only store the state for any levels you've actually seen.
As you go higher, they can just arbitrarily select a location in the simulation a few levels up from where you are that is consistent with the metapixels you've seen; once you go up a few levels, there's no chance of you having seen beyond a very small window of the simulation, so it's a matter of just finding a matching pattern.
As you go lower, the time step cannot be set to zero, so they can simply initialize the simulation a few levels down to an arbitrary state, since the lower levels tick exponentially faster.
The only problem is that you do have to store state for levels between the highest level you've seen and the current level as you zoom in. I suppose this means that if you zoom out a lot (just spam the scroll) and then zoom in a lot, there might be substantial memory usage. I've tested it and they do seem to consistently remember the exact state of the simulation at least a few levels up - it's easy to check by looking at the length of the clock train on the left of the metapixel in each level.
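Here's a minimal sketch of that "only store levels you've seen" idea. This is my guess at the shape of the approach, not the site's actual code: keep a map keyed by zoom level, synthesize a consistent parent on demand when zooming out, and seed an arbitrary child when zooming in.

```python
class LazyLevels:
    """Store metapixel state only for zoom levels actually visited."""

    def __init__(self, seed_state):
        self.levels = {0: seed_state}  # level -> grid state (any representation)
        self.current = 0

    def zoom_out(self, make_consistent_parent):
        """Going up: invent a parent whose child region matches what the
        user has already seen at the current level."""
        parent = self.current - 1
        if parent not in self.levels:
            self.levels[parent] = make_consistent_parent(self.levels[self.current])
        self.current = parent
        return self.levels[parent]

    def zoom_in(self, make_arbitrary_child):
        """Going down: lower levels tick exponentially faster, so any
        state is plausible; pick one and remember it."""
        child = self.current + 1
        if child not in self.levels:
            self.levels[child] = make_arbitrary_child()
        self.current = child
        return self.levels[child]
```

The memory concern in the comment falls out directly: every level between the highest and lowest you've visited stays in `self.levels`, so spamming zoom-out then zoom-in grows the dict with one entry per level crossed.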
That's purely an optimization, though. Meaning, it would look just like this if it really were "GoL all the way down".
Thank you, I came here to ask for an explanation for how this was optimized.