Comment by lumost

1 year ago

It could also be as simple as OAI experimenting with different datasets. Perhaps chess games were included in some GPT-3.5 training runs to see whether training on chess would improve other tasks. Perhaps it was afterwards determined that yes, LLMs can play chess - but no, let's not spend time/compute on this.

Would be a shame, because chess is an excellent metric for testing logical thought and internal modeling. An LLM that can pick up a unique chess game halfway through and play it well to completion is clearly doing more than "predicting the next token based on the previous one".
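The test described above is easy to sketch as a harness: feed the model the first half of a game, then score how many of its proposed continuation moves are legal. Everything here is a hypothetical stand-in - `model` and `is_legal` are toy stubs; a real run would call an actual LLM API and use a chess library such as python-chess for legality checks.

```python
# Sketch of the experiment: hand a model a game prefix and score its
# continuation. Stops counting at the first illegal move.
def continuation_legality(prefix_moves, model, is_legal, max_plies=20):
    """Return the fraction of `max_plies` continuation moves from `model`
    that were legal, stopping at the first illegal or missing move."""
    history = list(prefix_moves)
    legal = 0
    for _ in range(max_plies):
        move = model(history)  # hypothetical: query the LLM for the next move
        if move is None or not is_legal(history, move):
            break
        legal += 1
        history.append(move)
    return legal / max_plies

# Toy stand-ins so the sketch runs: the "model" replays a scripted line,
# and the "oracle" accepts anything on that script.
scripted = ["e4", "e5", "Nf3", "Nc6"]
prefix = ["d4", "d5"]
model = lambda hist: (scripted[len(hist) - len(prefix)]
                      if len(hist) - len(prefix) < len(scripted) else None)
is_legal = lambda hist, mv: mv in scripted

print(continuation_legality(prefix, model, is_legal, max_plies=4))  # 1.0
```

A legality rate near 100% on games absent from the training set would support the claim that the model maintains an internal board state rather than pattern-matching surface tokens.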