
Comment by c1b

13 hours ago

Yes, it just collapses eventually and never stabilizes. The training process is flawed; I suspect it has to do with the fact that some weights blow up over time. You can see this in the "weights" tab.
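A common guard against this kind of blow-up is global gradient-norm clipping. A minimal sketch (the function name and the NumPy-based setup are my own, not from the demo):

```python
import numpy as np

def clip_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm -- a standard guard against weights
    blowing up during training."""
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads, total

# Example: gradients with global norm sqrt(9 + 16 + 144) = 13
grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]
clipped, norm = clip_global_norm(grads, max_norm=5.0)  # norm == 13.0
```

Logging `norm` each step would also reproduce roughly what the "weights" tab shows: the norm trending upward right before the collapse.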

But at around 4K avg score you should see it solve the env almost every time.

Just a demo :) optimized for speed over stability.

Reward structure: Step: −1, Dot: +100, Win: +1000 — so ~4K is the max theoretical score on 6×6.
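A rough back-of-envelope check of that ~4K figure, assuming a snake-style env where one cell is occupied at the start and the average path length per dot is a guess on my part:

```python
# Assumptions (mine, not confirmed by the author): 6x6 grid,
# one cell occupied at the start, so 35 dots to collect;
# avg_steps_per_dot is a rough guess.
GRID_CELLS = 6 * 6
DOTS = GRID_CELLS - 1
STEP_R, DOT_R, WIN_R = -1, 100, 1000

avg_steps_per_dot = 12  # guessed; in practice this grows late-game
score = DOTS * DOT_R + WIN_R + DOTS * avg_steps_per_dot * STEP_R
print(score)  # ~4K under these assumptions
```

The step penalty is what keeps the ceiling below the raw 4500 (35 × 100 + 1000), which is why "perfect play" still lands around 4K rather than at a fixed number.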

Maybe because it doesn't understand "done"? Perfect play is impossible; random variance will cause scores to drop even when the model plays well and "wins". It feels like it would get stuck in a loop trying to improve what can't be improved.

  • The optimizer doesn't need to understand anything; it's just an iterated mathematical construct. The author simply didn't bother to implement the details necessary to ensure numerical stability.

    Alternatively it might be a problem with the scoring model in the end game.

  • feels like it would get stuck in a loop trying to improve what can't be improved.

    That is the point: there is no intention involved, and nothing it "cannot improve" — the process is just iteration after iteration over the same path.