Comment by Wowfunhappy

1 month ago

I think you would have gotten more generic games. The AI was clearly attempting to find meaning in what the dog typed, and that drove what it made.

Now, if Anthropic let you adjust the temperature, then maybe you could have done it without the dog...
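
For what it's worth, the Anthropic Messages API does expose a temperature parameter; a rough sketch with the Python SDK, where the model name is only a placeholder:

    import anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    # Higher temperature flattens the sampling distribution, so output varies more.
    # The model name is a placeholder; use whichever model the project actually ran.
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        temperature=1.0,  # Anthropic accepts 0.0 (near-deterministic) up to 1.0
        messages=[{"role": "user", "content": "Invent and build a small browser game."}],
    )
    print(message.content[0].text)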

The AI cannot derive meaning from the dog's input because there's no useful information encoded in it. It's effectively a random string (to the extent it's less than random, that's only because a dog's paw is physically pressing on a keyboard).

All the relevant information was in the initial prompt and the scaffolding. The dog was not even /dev/random; it was simply a trigger to "give it another go".
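
To make that concrete, swapping the dog for /dev/random would look something like this. A rough sketch only: KEYS and send_to_model are made up here and stand in for whatever the real scaffolding does.

    import os

    # A hypothetical cluster of keys a paw might plausibly mash, plus space.
    KEYS = "asdfjkl; qwetyuiom,."

    def dev_random_mash(length=12):
        """Turn OS entropy (the /dev/random idea) into a keyboard-mash string."""
        return "".join(KEYS[b % len(KEYS)] for b in os.urandom(length))

    def send_to_model(mash):
        """Stand-in for the real scaffolding: build the prompt and call the model."""
        print(f"pretend we prompt the model with {mash!r}")

    # Each trigger just means "give it another go"; the bytes carry no intent.
    for _ in range(3):
        send_to_model(dev_random_mash())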

  • The shapes of clouds and positions of stars are essentially random, and yet humans derive meaning from both. I agree you could have gotten the same results via /dev/random, or probably by increasing the temperature on the model, but I suspect doing one of those things is important.

    • The LLM cannot derive meaning in a human sense.

      The shapes of clouds and positions of stars aren't completely random; there is useful information in them, to varying degrees (e.g. some clouds look enough like a rabbit that a majority of people will agree). The mechanism at play here with the LLM is completely different; the connection between the dog's input and the resulting game barely exists, if at all. Maybe the only signal is "some input was entered, therefore the user wants a game".

      If you could have gotten the same result with any input, or with /dev/random, then effectively no useful information was encoded in the input. The initial prompt and the scaffolding do encode useful information, however, and are the ones doing the heavy lifting; the article admits as much.
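
      One way to make "no useful information" precise is mutual information: if the game that comes out doesn't depend on which mash went in, I(input; game) is roughly zero, and every bit that matters came from the prompt and scaffolding. A toy estimate with made-up mash strings and game labels (nothing from the actual project):

          import math
          import random
          from collections import Counter

          def mutual_info_bits(pairs):
              """Plug-in estimate of I(X;Y) in bits from (x, y) samples."""
              n = len(pairs)
              px, py, pxy = Counter(), Counter(), Counter(pairs)
              for x, y in pairs:
                  px[x] += 1
                  py[y] += 1
              return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                         for (x, y), c in pxy.items())

          random.seed(0)
          mashes = [random.choice(["asdfj", "kl;aa", "mmmm,", "qwert"]) for _ in range(20000)]

          # Dog-as-trigger: the game is picked with no regard to the mash.
          ignored = [random.choice(["platformer", "puzzle", "clicker"]) for _ in mashes]

          # Counterfactual: the game genuinely depends on the mash.
          table = {"asdfj": "platformer", "kl;aa": "puzzle", "mmmm,": "clicker", "qwert": "puzzle"}
          depends = [table[m] for m in mashes]

          print(mutual_info_bits(list(zip(mashes, ignored))))   # ~0 bits
          print(mutual_info_bits(list(zip(mashes, depends))))   # ~1.5 bits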
