Comment by benlivengood

5 months ago

Wouldn't playthroughs for these games be potentially in the pretraining corpus for all of these models?

Reproducing specific chunks of long-form text from a model's distilled (inherently lossy) representation of its training data is not something I would expect LLMs to be good at.

And of course, there's no actual reasoning or logic going on, so in this context they can't compete with a curious 12-year-old, either.