Comment by pama

2 years ago

A minor detail here is that the analysis in the blog shows that the linear model trained on the activations of an internal layer has a probabilistic representation of the board. Of course the full model is also probabilistic by design, though it probably has a better internal understanding of the board state than the linear projection used to visualize/interpret the model's internals. There is no real meaning to the word "spatial" representation beyond the particular connectivity of the graph of locations, which the model seems to understand well, given that 98% of its moves are valid; and that figure includes sampling with whatever probabilistic algorithm of choice, which may not always return the model's best move.

A different way to test the internal state of the model would be to score all possible moves, valid and invalid, at every position and see how the probabilities of these moves change as a function of the player's Elo rating. One would expect invalid moves to always score poorly, independent of Elo, whereas valid moves would score monotonically with how good they are (as assessed by Stockfish), and a stronger player's Elo would stretch that monotonic function to better separate the best moves from the weakest ones.
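The proposed probe could be sketched roughly as follows. Everything here is a stand-in: `candidate_moves` and `dummy_score_moves` are made-up placeholders for a real move enumerator and the real model's per-move probabilities, not an actual interface from the blog.

```python
# Sketch of the probe: measure how much probability mass the model puts
# on invalid moves, at several conditioning Elo ratings. The expectation
# stated above is that this mass stays near zero regardless of Elo.

def candidate_moves(position):
    # Stand-in: a real test would enumerate every syntactically possible
    # move and tag each as valid or invalid in the given position.
    return [("e2e4", True), ("g1f3", True), ("e1e8", False)]

def dummy_score_moves(position, elo):
    # Placeholder for the real model; returns a probability per move.
    # (This dummy ignores elo; the real model's scores would depend on it.)
    moves = candidate_moves(position)
    raw = [1.0 if valid else 0.01 for _, valid in moves]
    total = sum(raw)
    return {mv: r / total for (mv, _), r in zip(moves, raw)}

def invalid_mass(position, elo, score_moves=dummy_score_moves):
    """Total probability assigned to invalid moves in this position."""
    probs = score_moves(position, elo)
    valid = dict(candidate_moves(position))
    return sum(p for mv, p in probs.items() if not valid[mv])

for elo in (800, 1500, 2200):
    print(elo, invalid_mass("startpos", elo))  # stays near zero here
```

The analogous check for the second expectation would rank the valid moves by Stockfish evaluation and test whether the model's probabilities track that ranking more sharply at higher Elo.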

> There is no real meaning in the word "spatial" representation beyond the particular connectivity of the graph of the locations

I don't think it makes sense to talk of the model (potentially) knowing that knights make L-shaped moves (i.e. 2 squares left or right plus 1 square up or down, or vice versa) unless it can add/subtract row/column numbers to determine the squares it can move to on the basis of this (hypothetical) L-shaped-move knowledge.

Being able to do row/column math is essentially what I mean by spatial representation - that it knows the spatial relationships between rows ("1"-"8") and columns ("a"-"h"), such that if it had a knight on e1 it could then use this L-shaped move knowledge to do coordinate math like e1 + (1,2) = f3.
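To make the coordinate-math claim concrete, here is what that computation looks like spelled out. This is just an illustration of the arithmetic being discussed, not a claim about anything inside the model:

```python
# Coordinate math for knight moves: square name -> (file, rank),
# apply an L-shaped offset, map back to a square name.

def square_to_coords(sq):
    """'e1' -> (4, 0): files a-h map to 0-7, ranks 1-8 map to 0-7."""
    return ord(sq[0]) - ord('a'), int(sq[1]) - 1

def coords_to_square(file, rank):
    return chr(ord('a') + file) + str(rank + 1)

def knight_destinations(sq):
    """All on-board squares reachable by an L-shaped (+/-1,+/-2) or
    (+/-2,+/-1) offset from sq."""
    f, r = square_to_coords(sq)
    offsets = [(1, 2), (2, 1), (-1, 2), (-2, 1),
               (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return sorted(coords_to_square(f + df, r + dr)
                  for df, dr in offsets
                  if 0 <= f + df < 8 and 0 <= r + dr < 8)

print(knight_destinations('e1'))  # -> ['c2', 'd3', 'f3', 'g2']
```

The e1 + (1, 2) = f3 example from above falls out of `knight_destinations('e1')`; the question is whether the model does anything like this, or just recalls moves.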

I rather doubt this is the case. I expect the board representation is just a map from square name (not coordinates) to the piece on that square, and that generated moves are likely limited to those it saw the moved piece make from the same square during training. That is, it's not calculating possible knight destinations based on an L-shaped-move generalization, but rather "recalling" a move it had seen during training when (among other things) it had a knight on a given square.

Somewhat useless speculation perhaps, but it would seem simple and sufficient, and it's an easy hypothesis to test.
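One way to test the recall-vs-generalization hypothesis: index which destinations each (piece, from-square) pair was ever seen to reach in training, then measure what fraction of the model's generated moves are novel relative to that index. The data shapes and names below are hypothetical; real training games and model outputs would need to be parsed into this form first.

```python
from collections import defaultdict

def seen_moves_by_square(training_moves):
    """Index (piece, from_square) -> set of destinations observed in
    training. training_moves is assumed to yield (piece, frm, to)."""
    seen = defaultdict(set)
    for piece, frm, to in training_moves:
        seen[(piece, frm)].add(to)
    return seen

def novelty_rate(model_moves, seen):
    """Fraction of generated moves never seen from that square in
    training. Under the pure-recall hypothesis this should be ~0;
    under coordinate generalization it could be substantial."""
    novel = sum(1 for piece, frm, to in model_moves
                if to not in seen[(piece, frm)])
    return novel / len(model_moves)

# Toy data standing in for real training games and model output:
train = [("N", "e1", "f3"), ("N", "e1", "d3")]
generated = [("N", "e1", "f3"), ("N", "e1", "c2")]
print(novelty_rate(generated, seen_moves_by_square(train)))  # -> 0.5
```

A meaningfully nonzero novelty rate on valid moves would be evidence against the pure-recall story.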