
Comment by thomassmith65

9 days ago

One interesting quirk with Claude is that it has no idea its Chain-of-Thought is visible to users.

In one chat, it repeatedly accused me of lying about that.

It only conceded after I had it think of a number between one and a million, and successfully 'guessed' it.

Edit: wahnfrieden corrected me. I incorrectly posited that the CoT was only included in the context window during the reasoning task and left out entirely afterward. Edited to remove potential misinformation.

  • No, the CoT is not simply extra context; the models are specifically trained to use CoT, and that includes treating it as unspoken thought

    • Huge thank you for correcting me. Do you have any good resources I could look at to learn how the previous CoT is included in the input tokens and treated differently?


  • In which case the model couldn't possibly know that the number was correct.

    • I'm also confused by that, but it could just be the model being agreeable. I've seen multiple examples posted online, though, where it's fairly clear that the CoT output is not included in subsequent turns. I don't believe Anthropic is public about it (could be wrong), but I know that the Qwen team specifically recommends against including CoT tokens from previous inferences (a rough sketch of what that looks like in practice is below).

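For reference, here is a minimal sketch of what "not including CoT tokens from previous inferences" can look like when assembling a multi-turn prompt. It assumes an OpenAI-style message list and Qwen-style `<think>…</think>` markers; the tag format, the `strip_cot` helper, and the example messages are illustrative assumptions, not any vendor's actual API or chat template.

```python
# Sketch: drop reasoning/CoT blocks from prior assistant turns before building
# the next request, so the model never sees its earlier "thoughts" as context.
# Assumes Qwen-style <think>...</think> markers; other models use other formats.
import re
from typing import Dict, List

THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_cot(messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Return a copy of the chat history with CoT removed from assistant turns."""
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant":
            msg = {**msg, "content": THINK_BLOCK.sub("", msg["content"])}
        cleaned.append(msg)
    return cleaned

# Hypothetical history illustrating the number-guessing anecdote above.
history = [
    {"role": "user", "content": "Think of a number between one and a million."},
    {"role": "assistant",
     "content": "<think>I'll pick 482193.</think>Okay, I have a number in mind."},
    {"role": "user", "content": "Is it 482193?"},
]

# Only the visible reply survives; the earlier CoT never re-enters the context,
# so on the next turn the model has no record of which number it "picked".
print(strip_cot(history)[1]["content"])  # -> "Okay, I have a number in mind."
```

Under this scheme the stripping happens client-side (or in the serving layer's chat template) each time the history is re-sent, which is why the model cannot later verify claims about what its own CoT contained.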