Comment by Timwi

2 years ago

I think what the commenter is saying is that, in time, language models too will do a lot more than string words together. If it's large enough, and you train it well enough to respond to “what's the best next move in this chess position?” prompts with good moves, it will inevitably learn chess.

I don't think that follows, necessarily. Chess has an unfathomable number of states. While the LLM might be able to play chess competently, I would not say it has learned chess unless it is able to judge the relative strength of various moves. From my understanding, an LLM will not evaluate future states of a chess game when responding to such a prompt. Without that ability, it's no different from someone receiving anal bead communications from Magnus Carlsen.
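To make "judging future states" concrete: classical chess engines score a candidate move by searching the game tree ahead of the current position (e.g. minimax), rather than pattern-matching on the position alone. Here is a minimal sketch of that idea over a made-up two-ply game tree; the states, moves, and scores are all hypothetical, and a real engine would add depth limits, pruning, and a proper evaluation function.

```python
# Toy minimax: judge a move by the value of the future positions it leads to.
# The game tree and leaf scores below are invented for illustration only.

def minimax(state, tree, leaf_scores, maximizing=True):
    """Return the best achievable score from `state` assuming perfect play."""
    children = tree.get(state)
    if not children:  # leaf position: use its static evaluation
        return leaf_scores[state]
    scores = [minimax(c, tree, leaf_scores, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

# Hypothetical tree: from "root" we can play move "a" or "b",
# each of which the opponent can answer in two ways.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": -1, "b1": 0, "b2": 5}

# Each candidate move is judged by looking ahead; the opponent (minimizing)
# moves next, so move "a" is worth -1 and move "b" is worth 0.
best = max(tree["root"],
           key=lambda m: minimax(m, tree, leaf_scores, maximizing=False))
```

The point of contention is that a plain next-token predictor emits a move without running any search like this, even if its outputs often coincide with strong moves.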

  • An LLM could theoretically build an internal model with which to understand chess and predict the next move; you would just need to adjust the training data and keep training the model until that behavior appears.

    The expressiveness of language lets this be true of almost everything.