
Comment by andy_ppp

1 year ago

You're imagining that LLMs do anything beyond regurgitating and recombining things they have seen before. A new variant would not be in the dataset, so it would not be understood. In fact, this is quite a good way to show that LLMs are NOT thinking or understanding anything in the way we understand it.

Yes, that's how you can really tell whether the model is doing real thinking rather than just recombining things. If it can correctly play a novel game, then it's doing more than that.

  • I wonder what minimal amount of change would qualify as novel?

    "Chess but white and black swap their knights" for example?

    • I wonder what would happen with a game that is mostly chess (or chess with truly minimal variations) but with all the names changed (pieces, moves, "check", etc.). The algebraic notation is also replaced with something else so it cannot be pattern-matched against the training data. Then you list the rules (which are mostly the same as chess). (A toy sketch of such a re-encoding follows below.)

      None of these changes are explained to the LLM, so if it can tell it's still chess, it must deduce this on its own.

      Would any LLM be able to play at a decent level?

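      To make the idea concrete, here is a minimal sketch (all piece names, square tokens, and markers below are invented purely for illustration) of how standard algebraic notation could be mechanically re-encoded so the moves no longer resemble anything in a chess corpus, while the underlying game stays unchanged:

        # Toy re-encoding of standard algebraic notation (SAN) into an
        # invented vocabulary. Every name and symbol here is made up;
        # the mapping only needs to be applied consistently.

        PIECE_NAMES = {"K": "zarek", "Q": "velun", "R": "toro",
                       "B": "silm", "N": "quar"}             # invented piece names
        FILE_TOKENS = dict(zip("abcdefgh", "mvqzrtlk"))      # remap files a-h
        RANK_TOKENS = dict(zip("12345678", "ABCDEFGH"))      # remap ranks 1-8

        def obfuscate_move(san: str) -> str:
            """Translate a SAN move like 'Nf3' or 'exd5' into the invented notation."""
            out = []
            for ch in san:
                if ch in PIECE_NAMES:
                    out.append(PIECE_NAMES[ch] + " ")
                elif ch in FILE_TOKENS:
                    out.append(FILE_TOKENS[ch])
                elif ch in RANK_TOKENS:
                    out.append(RANK_TOKENS[ch])
                elif ch == "x":
                    out.append("!")              # invented capture marker
                elif ch == "+":
                    out.append(" (threat)")      # invented word for "check"
                else:
                    out.append(ch)
            return "".join(out)

        for move in ["e4", "Nf3", "exd5", "Qh5+"]:
            print(move, "->", obfuscate_move(move))

      The rules text given to the model would be run through the same substitution, so nothing in the prompt explicitly says "chess"; the question is whether the model still recognizes and plays the underlying game.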