
Comment by sanxiyn

2 years ago

I mean, this seems obvious to me. How would the model predict the next move WITHOUT calculating the board state first? Yes, by memorization, but the memorization hypothesis is easily rejected by comparison with the training dataset in this case.
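To make that concrete, a rough, hypothetical sketch of such a comparison: check how many model-generated games diverge from every training game within the first few dozen tokens. The file names and data format below are my own assumptions, not from the original work.

```python
# Hypothetical sketch: testing the memorization hypothesis by checking whether
# model-generated games (as PGN-style move strings) already appear in the
# training set. One game per line, e.g. "1.e4 e5 2.Nf3 Nc6 ...".

def load_games(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def novelty_rate(generated: list[str], training: set[str], prefix_tokens: int = 30) -> float:
    """Fraction of generated games whose first `prefix_tokens` tokens match
    no training-game prefix. A high rate argues against rote memorization."""
    train_prefixes = {" ".join(g.split()[:prefix_tokens]) for g in training}
    novel = sum(
        " ".join(g.split()[:prefix_tokens]) not in train_prefixes
        for g in generated
    )
    return novel / len(generated)

# Example usage (assumed file names):
# training = load_games("train_games.txt")
# generated = sorted(load_games("model_games.txt"))
# print(f"{novelty_rate(generated, training):.1%} of generated games leave the training data")
```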

It is possible the model calculates an approximate board state, which differs from the true board state but is equivalent for most games, though not all. It would be interesting to train an adversarial policy to check this. From the KataGo attack we know this does happen for Go AIs: Go rules have a concept of liberty, but the so-called pseudoliberty is easier to calculate and equivalent in most cases (but not all). In fact, human programmers also used pseudoliberties to optimize their engines. The adversarial attack found that Go AIs use pseudoliberties as well.
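For readers unfamiliar with the distinction, here is a minimal illustrative sketch (not taken from KataGo or the attack paper): a group's true liberties are its distinct empty neighbouring points, while pseudoliberties count each empty neighbour once per adjacent stone, so shared points get counted multiple times. Both counts hit zero together, which is why the shortcut works for capture detection but diverges elsewhere.

```python
# Liberty vs. pseudoliberty on a tiny Go board (illustrative only).

EMPTY, BLACK, WHITE = ".", "X", "O"

def neighbors(r, c, size):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < size and 0 <= c + dc < size:
            yield r + dr, c + dc

def group(board, r, c):
    """Flood-fill the connected group of stones containing (r, c)."""
    color, size = board[r][c], len(board)
    stack, seen = [(r, c)], {(r, c)}
    while stack:
        cr, cc = stack.pop()
        for nr, nc in neighbors(cr, cc, size):
            if board[nr][nc] == color and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return seen

def liberties(board, stones):
    """True liberties: distinct empty points adjacent to the group."""
    size = len(board)
    libs = {(nr, nc) for r, c in stones for nr, nc in neighbors(r, c, size)
            if board[nr][nc] == EMPTY}
    return len(libs)

def pseudo_liberties(board, stones):
    """Pseudoliberties: empty neighbours counted once per adjacent stone,
    so a shared empty point can be counted several times."""
    size = len(board)
    return sum(board[nr][nc] == EMPTY
               for r, c in stones for nr, nc in neighbors(r, c, size))

board = [list(row) for row in [
    ".X.",
    "XX.",
    "...",
]]
g = group(board, 0, 1)
print(liberties(board, g), pseudo_liberties(board, g))  # 5 vs. 6: (0,0) counted twice
# Both counts are zero exactly when the group is captured, but they differ
# otherwise -- which is where "equivalent for most cases (but not all)" bites.
```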

Surprisingly many people seem to believe LLMs cannot form any deeper world models beyond superficial relationships between words, even if figuring out a "hidden" model allows for a big leap in prediction performance – in this case, a hypothesis corresponding to the rules of chess happens to give the best bang for the buck for predicting strings with chess-notation structure.

But the model could in principle have just learned a long list of rote heuristics that happen to predict notation strings well, without making the inferential leap to a much simpler set of rules, and a learner weaker than an LLM could well have got stuck at that stage.

I wonder how well a human (or a group of humans) would fare at the same task and if they could also successfully reconstruct chess even if they had no prior knowledge of chess rules or notation.

(OTOH a GPT3+ level LLM certainly does know that chess notation is related to something called "chess", which is a "game" and has certain "rules", but to what extent is it able to actually utilize that information?)

It’s one thing to think it’s obvious, but quite another to prove it. I think the true value of this kind of work is that it’s helping to decipher what these models are actually doing. Far too often we hear “NNs / LLMs are black boxes” as if that were the end of the conversation.

> It is possible the model calculates an approximate board state

Yes - this is exactly what the probes show.
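For anyone unfamiliar with the technique: a probe here is just a small (usually linear) classifier trained on the model's internal activations to read off each square's contents. A minimal sketch of the idea, assuming you already have per-move activations and ground-truth board labels; the placeholder data below is random and only shows the shape of the experiment.

```python
# Minimal linear-probe sketch in the spirit of the Othello-GPT / chess-GPT
# probing work. `acts` would be residual-stream activations at each move and
# `labels` the true piece on each square (0 = empty, 1-12 = piece/colour);
# here they are random placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_positions, d_model, n_squares, n_classes = 2000, 256, 64, 13

acts = rng.normal(size=(n_positions, d_model))              # placeholder activations
labels = rng.integers(0, n_classes, size=(n_positions, n_squares))  # placeholder board states

train, test = slice(0, 1600), slice(1600, None)

accs = []
for sq in range(n_squares):
    probe = LogisticRegression(max_iter=1000)  # one linear probe per square
    probe.fit(acts[train], labels[train, sq])
    accs.append(probe.score(acts[test], labels[test, sq]))

print(f"mean per-square probe accuracy: {np.mean(accs):.2%}")
# On random data this hovers around chance (~1/13); high accuracy on real
# activations is the evidence that an (approximate) board state is decodable.
```

The probe is kept linear on purpose: the point is not just that the information is somewhere in the network, but that the board state is represented simply enough to be read out with a single linear map.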

One interesting aspect is that it still learns to play when trained on blocks of move sequences starting from the MIDDLE of the game, so it seems it must be incrementally inferring the board state from what's being played rather than just tracking the moves from the standard starting position.