Comment by shaky-carrousel
1 year ago
Yes, that's how you can really tell if the model is doing real thinking and not just recombining things. If it can correctly play a novel game, then it's doing more than that.
I wonder what the minimal amount of change is that qualifies as novel?
"Chess but white and black swap their knights" for example?
I wonder what would happen with a game that is mostly chess (or chess with truly minimal variations) but with all the names changed (pieces, moves, "check", etc. all changed). The algebraic notation is also replaced with something else so it cannot be pattern-matched against the training data. Then you list the rules (which are mostly the same as chess).
None of these changes are explained to the LLM, so if it can tell it's still chess, it must deduce that on its own.
Would any LLM be able to play at a decent level?
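For concreteness, here is a minimal sketch of the notation-renaming part of that experiment, in Python. Every piece name, coordinate word, and helper here (obfuscate_san and its mapping tables) is invented purely for illustration, not taken from any existing tool or the commenter's actual setup:

```python
# A minimal sketch of the renaming idea: standard SAN moves are re-encoded
# with an invented vocabulary so a game transcript shares no surface tokens
# with chess text in a training corpus. All names below are made up.

PIECE_NAMES = {"K": "Zar", "Q": "Vex", "R": "Tol", "B": "Mur", "N": "Pix", "": "Dot"}  # "" = pawn (no SAN letter)
FILE_NAMES = dict(zip("abcdefgh", ["lo", "mi", "na", "pe", "ru", "sa", "ti", "vu"]))
RANK_NAMES = dict(zip("12345678", ["ka", "ke", "ki", "ko", "ku", "kya", "kyo", "kyu"]))

def obfuscate_san(san: str) -> str:
    """Translate one SAN move (e.g. 'Nf3', 'exd5', 'e8=Q', 'O-O') into the invented vocabulary."""
    if san in ("O-O", "O-O-O"):
        return "near-tuck" if san == "O-O" else "far-tuck"   # castling gets its own words
    san = san.rstrip("+#")                                    # drop check/mate marks for brevity
    piece = san[0] if san[0] in "KQRBN" else ""
    out = [PIECE_NAMES[piece]]
    for ch in san[len(piece):]:
        if ch in FILE_NAMES:
            out.append(FILE_NAMES[ch])
        elif ch in RANK_NAMES:
            out.append(RANK_NAMES[ch])
        elif ch == "x":
            out.append("strikes")
        elif ch == "=":
            out.append("becomes")
        elif ch in PIECE_NAMES:
            out.append(PIECE_NAMES[ch])                       # promotion target, e.g. '=Q'
    return " ".join(out)

# "1. e4 e5 2. Nf3 Nc6 ..." becomes text with no lexical overlap with chess notation:
for mv in ["e4", "e5", "Nf3", "Nc6", "exd5", "e8=Q", "O-O"]:
    print(mv, "->", obfuscate_san(mv))
```

The same mapping applied in reverse would let you score the model's replies with an ordinary chess engine while it only ever sees the invented vocabulary.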
Nice. Even the tiniest rule, I strongly suspect, would throw off pattern matching. "Every second move, the piece you move takes the name of the last piece you moved."
By that standard (and it is a good standard), none of these "AI" things are doing any thinking.
musical goalposts, gotta love it.
These LLMs just exhibited agency.
Swallow your pride.
"Does it generalize past the training data" has been a pre-registered goalpost since before the attention transformer architecture came on the scene.
No LLM is doing any thinking.
How do you define thinking?
Being fast at doing linear algebra computations. (Is there any other kind?!)
Making the OP feel threatened/emotionally attached/both enough to call the language model a rival / companion / peer instead of a tool.