Comment by markisus
2 months ago
R1 somehow knew at an early stage that the message was HELLO, but it couldn't figure out why. Even at the end, its last "thought" insists that there is an encoding mistake somewhere. However, the final message is correct. I wonder how well it would do on a nonstandard message. Any sufficiently long English message would fall to statistical analysis, and I wonder if the LLMs would think to write a little Python script to do the job.
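For what it's worth, the kind of script I have in mind would be something like this rough sketch, assuming the hidden message behaves like a simple symbol-for-letter substitution (which may not match the actual encoding):

```python
from collections import Counter

# English letters ordered roughly from most to least frequent.
ENGLISH_BY_FREQ = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_guess(ciphertext: str) -> str:
    """Guess a monoalphabetic substitution by ranking ciphertext symbols
    by frequency and pairing them with the most common English letters.
    Only becomes reliable on reasonably long ciphertexts."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    ranked = [sym for sym, _ in Counter(letters).most_common()]
    mapping = dict(zip(ranked, ENGLISH_BY_FREQ))
    return "".join(mapping.get(c, c) for c in ciphertext.upper())

# A long enough passage encoded with a fixed substitution starts to read
# as English; short toy inputs like this mostly come out garbled.
print(frequency_guess("WKLV LV MXVW D WRB GHPRQVWUDWLRQ RI IUHTXHQFB DQDOBVLV"))
```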
Wow, that's interesting! I wonder if this reproduces with a different message, or if it was a lucky guess.
I looked at how the strings tokenize, and they do appear to preserve enough information that the message could be decoded in theory.
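Here's roughly the kind of check I mean, using tiktoken as a stand-in tokenizer (R1 uses its own vocabulary, so this is only illustrative) and a made-up toy payload carried as invisible variation selectors after a visible emoji:

```python
# Rough check that tokenization keeps the hidden code points intact.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

carrier = "😊" + "".join(chr(0xFE00 + n) for n in (7, 4, 11, 11, 14))  # toy payload
tokens = enc.encode(carrier)
roundtrip = enc.decode(tokens)

print(tokens)                            # the token ids a model would actually see
print([hex(ord(c)) for c in roundtrip])  # the hidden code points survive the round trip
assert roundtrip == carrier
```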
> or if it was a lucky guess
It's like guessing 1/2 or 2/3 on a math test. The test authors pick nice numbers, and programmers like "hello". If the way the secret message is encoded resembles other encodings, it's probably that the pattern-matching monster picked up on it and is struggling to autocomplete (i.e., backwards-rationalize) a reason why.
I did some experimentation today. I wouldn't expect an AI to solve these using only its own reasoning, but I've had a decent hit rate getting models to solve them when they have access to a Python interpreter. Here's Gemini Flash 2 solving one (although it lost the spaces) in a single prompt and about 7 seconds! (A rough sketch of that kind of decoder is below the link.)
https://bsky.app/profile/paulbutler.org/post/3lhzhroogws2g
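A decoder of the sort these models tend to write might look something like this. It assumes the bytes are hidden in Unicode variation selectors with a guessed mapping (0-15 to U+FE00..U+FE0F, 16-255 to U+E0100..U+E01EF); the actual encoding in the post may differ:

```python
def decode_variation_selectors(text: str) -> bytes:
    """Recover hidden bytes from Unicode variation selectors.
    Assumed mapping: byte 0-15 -> U+FE00..U+FE0F, 16-255 -> U+E0100..U+E01EF."""
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            out.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            out.append(cp - 0xE0100 + 16)
    return bytes(out)

# Usage: paste the emoji, trailing invisible characters and all, as the input.
carrier = "😊" + "".join(chr(0xE0100 + b - 16) for b in b"hello")
print(decode_variation_selectors(carrier).decode("utf-8"))  # -> "hello"
```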