Comment by ramly

2 years ago

Very valid take - a few thoughts to share on this. And thanks for linking the writeup about Linjat's procedural generation; that was a great read, with many interesting insights on the approach the author used to find solutions to the puzzles in his logic game. The solver, with its conceptual layers, seems particularly interesting. The challenge with this type of 'forward' puzzle generation, however, is that it tends to produce trivial levels by default. The Linjat author actually admits this limitation in the same post, shortly after introducing the neat solver:

> "This process will not always produce a solution, but it's pretty fast (on the order of 50-100 microseconds) so it can just be repeated a bunch of times until it generates a level. Unfortunately it'll generally produce a mediocre puzzle. There are too many obvious moves right at the start, the board gets filled in very quickly and the solution tree is quite shallow."

This tracks well with my own earlier experiments and my intuition about the (in)consistency of generating interesting echo chess levels by solution-tracing one random walk at a time. That said, could this forward-pass procedural generation be improved by adding a discriminator/scoring system after it, to make sure we're not settling for the most trivial of solutions? Absolutely. In fact, the author goes on to explain how Linjat does that as well.

> "The above process produced a mediocre puzzle. In the final stage, we use that level as a seed for an optimization process. The optimizer sets up a pool of up to 10 puzzle variants. The pool is initialized with the newly generated random puzzle. On each iteration, the optimizer selects one puzzle from the pool and mutates it. [...] We then run the solver in the special level-generation mode [...] to make it solvable again. After that, we run the solver again, this time in the normal mode. [...] The new puzzle is then added to the pool. If the pool ever contains more than 10 puzzles, the worst one is discarded. This process is repeated a number of times (anything from 10k to 50k iterations seemed to be fine). After that, the version of the puzzle with the highest score is saved into the puzzle's level database."

In other words, what we'd really be doing is: (1) run some iterative process to generate a (trivially) solvable walk, (2) add random mutations to make the (solvable) level interesting, (3) readjust to make the (interesting) level solvable again, (4) rinse and repeat until we have a (hopefully) interesting and solvable level.
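The four-step loop above can be sketched roughly as follows. This is a minimal toy illustration, not the actual Linjat or echo chess code: `generate_walk_level`, `mutate`, `make_solvable`, and `score` are all hypothetical placeholder names, and the level encoding and scoring function are made up for the sketch.

```python
import random

POOL_SIZE = 10  # cap on puzzle variants, as in the quoted Linjat optimizer

def generate_walk_level():
    # (1) forward random walk -> trivially solvable level (toy encoding)
    return {"cells": [random.randint(0, 3) for _ in range(16)]}

def mutate(level):
    # (2) random mutation to make the (solvable) level more interesting
    new = dict(level, cells=list(level["cells"]))
    new["cells"][random.randrange(len(new["cells"]))] = random.randint(0, 3)
    return new

def make_solvable(level):
    # (3) stub: a real repair step would re-run the solver in its
    # level-generation mode and adjust the level until it's solvable again
    return level

def score(level):
    # hypothetical proxy for "interestingness" (solution-tree depth, etc.)
    return sum(level["cells"])

def optimize(iterations=1000):
    # (4) rinse and repeat: mutate pool members, keep only the best
    pool = [generate_walk_level()]
    for _ in range(iterations):
        candidate = make_solvable(mutate(random.choice(pool)))
        pool.append(candidate)
        if len(pool) > POOL_SIZE:
            pool.remove(min(pool, key=score))  # discard the worst variant
    return max(pool, key=score)  # save the highest-scoring version
```

With a real solver plugged into `make_solvable` and a real difficulty metric in `score`, this is essentially the seed-then-optimize pipeline the Linjat post describes.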

To be clear, there's nothing wrong with this approach - it could even give us more granular, explicit control over difficulty curving. Indeed, with the right tweaks, I bet a solid solver + optimizer process could be devised to generate non-trivially solvable echo chess levels starting from a forward random walk.

Philosophically, in fact, the last step of the Linjat optimizer actually sounds pretty similar to how the current 'Endless' mode in echo chess selects a level to serve to the player: 50 random gens get evaluated by the ML ensemble, ranked by majority vote on solvability first and by probability of being solvable second, and the level with the highest score is the one that gets committed to the game.
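That selection step could be sketched like this. The ensemble here is a stand-in (echo chess uses trained ML models, not these toy predictors), and the level encoding, function names, and thresholds are all illustrative assumptions.

```python
import random

def generate_candidate():
    # placeholder level encoding, just for the sketch
    return [random.random() for _ in range(8)]

def ensemble_predict(level, n_models=5):
    # stand-in ensemble: each "model" returns a P(solvable) estimate
    return [min(1.0, max(0.0, sum(level) / len(level) + random.gauss(0, 0.1)))
            for _ in range(n_models)]

def select_level(n_candidates=50):
    # evaluate n_candidates random gens, rank by majority vote on
    # solvability first, then by mean predicted solvability
    best, best_key = None, None
    for _ in range(n_candidates):
        level = generate_candidate()
        probs = ensemble_predict(level)
        votes = sum(p >= 0.5 for p in probs)
        key = (votes > len(probs) / 2, sum(probs) / len(probs))
        if best_key is None or key > best_key:
            best, best_key = level, key
    return best  # the level that gets committed to the game
```

The tuple key makes the majority vote the primary sort criterion and the mean probability the tiebreaker, mirroring the two-stage ranking described above.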

Granted, the current echo chess generator starts directly from a parametrized random pool of 'mutations' before running them through its solvability discriminator, as opposed to starting from a definitionally solvable forward-walk kernel to be mutated. Mutations first, solvability second. But the result is the same. And since the starting pool is parametrized, there is still ample (if implicit) control over the distribution of components like obstacles, piece types, topology, etc. to vary the expected difficulty of the seeds.
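A parametrized seed pool of that kind might look like the following sketch. The knobs (obstacle density, piece-type weights) are the implicit difficulty controls; every name and default value here is a made-up illustration, not the actual echo chess parametrization.

```python
import random

def sample_seed(obstacle_density=0.15, piece_weights=None, size=8):
    # Draw one random level from a parametrized distribution.
    # obstacle_density and piece_weights are the difficulty knobs.
    piece_weights = piece_weights or {"knight": 3, "bishop": 2, "rook": 1}
    names = list(piece_weights)
    weights = list(piece_weights.values())
    board = []
    for _ in range(size * size):
        if random.random() < obstacle_density:
            board.append("wall")
        elif random.random() < 0.1:  # sparse piece placement
            board.append(random.choices(names, weights=weights)[0])
        else:
            board.append("empty")
    return board

# Skewing the knobs shifts expected difficulty: more obstacles,
# more short-range pieces -> presumably harder seeds on average.
hard_seed = sample_seed(obstacle_density=0.3,
                        piece_weights={"knight": 5, "bishop": 1, "rook": 1})
```

Seeds sampled this way would then go through the solvability discriminator, as described above, with the distribution parameters tuned to steer the difficulty of whatever survives the filter.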

TL;DR: we can start from random weights or fine-tune promising ones. But we still need to do something to get non-trivial levels generated.