Comment by bobmichael

2 years ago

That's a brilliantly simple approach. You could even have an estimated difficulty score that updates based on the number of moves, piece switches, etc., and ramp up the target difficulty as the random run progresses.
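
A minimal sketch of that ramping idea in Python, purely illustrative: the stats, weights, and thresholds below are hypothetical, not taken from Echo Chess.

```python
# Hypothetical sketch: score a generated level from run stats and accept it
# only if it lands near a target difficulty that ramps up over the run.
from dataclasses import dataclass

@dataclass
class LevelStats:
    moves: int     # moves needed to clear the level
    switches: int  # piece-type switches along the solution path

def difficulty_score(stats: LevelStats) -> float:
    # Toy linear weighting; assumes switches matter more than raw moves.
    return 1.0 * stats.moves + 2.5 * stats.switches

def target_difficulty(level_index: int, base: float = 10.0, ramp: float = 1.5) -> float:
    # Ramp the target up as the random run progresses.
    return base + ramp * level_index

def accept_level(stats: LevelStats, level_index: int, tolerance: float = 5.0) -> bool:
    # Regenerate random levels until one scores close enough to the target.
    return abs(difficulty_score(stats) - target_difficulty(level_index)) <= tolerance
```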

Yep. Even the boring, topology-agnostic 'macro' variables can be good proxies for difficulty: obstacle count, empty-square count, counts per piece type, etc. I don't have a direct source for this, but the original Echo Chess post has a good EDA section (plus feature importance plots) covering the impact of such variables on solvability. At scale, the average solvability of random levels might be a half-decent proxy for how the difficulty of individual ones would fluctuate.
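
To make that concrete, here's a hedged sketch of extracting those macro features from a level grid. The 'X'/'.' encoding and piece letters are assumptions for illustration, not the actual Echo Chess representation.

```python
# Hypothetical grid encoding: 'X' = obstacle, '.' = empty square,
# any other letter = a piece of that type (N, B, R, ...).
from collections import Counter

def macro_features(grid: list[str]) -> dict[str, int]:
    counts = Counter(ch for row in grid for ch in row)
    features = {
        "obstacles": counts.pop("X", 0),
        "empty": counts.pop(".", 0),
    }
    # Whatever symbols remain are treated as piece types.
    features.update({f"pieces_{piece}": n for piece, n in counts.items()})
    return features

level = ["N..X",
         ".X.B",
         "....",
         "R..X"]
print(macro_features(level))
# {'obstacles': 3, 'empty': 10, 'pieces_N': 1, 'pieces_B': 1, 'pieces_R': 1}
```

At scale, you could label a large batch of random levels as solved/unsolved and regress the labels on these features, which is roughly what the feature importance plots in the post are getting at.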