
Comment by inciampati

9 hours ago

o1 appears not to be able to see its own reasoning traces. Or its own context is potentially being summarized to deal with the cost of giving access to all those chain-of-thought traces and the chat history. This breaks the computational expressivity of chain of thought, which supports universal (general) reasoning when the model has reliable access to the things it has already thought, and collapses to a threshold circuit (TC0), a bounded parallel pattern matcher, when it does not.

My understanding is that o1's chain-of-thought tokens live in its own internal embedding space, and anything human-readable the UI shows you is a translation of those CoT tokens into natural language.

  • Where is that documented? FWIW, interactive use suggests they are not available to later invocations of the model (a rough API-level check is sketched below). Any evidence this isn't the case?
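
    Not documented anywhere I've seen, but the API behavior points the same
    way: reasoning tokens show up in the usage accounting yet never appear in
    the returned message, so there is nothing for a client to send back on a
    later turn. A minimal sketch of that check with the OpenAI Python SDK
    follows; the model name and the usage field
    (completion_tokens_details.reasoning_tokens) are assumptions based on the
    o1-preview API as I understand it, not something confirmed above.

    from openai import OpenAI

    client = OpenAI()

    first = client.chat.completions.create(
        model="o1-preview",  # assumed model identifier
        messages=[{"role": "user",
                   "content": "How many primes are below 100? Think it through."}],
    )

    # The visible answer contains no chain-of-thought text...
    print(first.choices[0].message.content)

    # ...yet the usage accounting reports reasoning tokens that were
    # generated and billed but never returned to the client.
    print(first.usage.completion_tokens_details.reasoning_tokens)

    # On the next turn the client can only resend the visible answer, so
    # whatever reasoning the model produced in turn one is absent from its
    # context when it is asked to recall it.
    second = client.chat.completions.create(
        model="o1-preview",
        messages=[
            {"role": "user",
             "content": "How many primes are below 100? Think it through."},
            {"role": "assistant",
             "content": first.choices[0].message.content},
            {"role": "user",
             "content": "Repeat the reasoning you used to get that answer."},
        ],
    )
    print(second.choices[0].message.content)

    In interactive use the second reply tends to reconstruct plausible
    reasoning rather than recall the original trace, which is consistent with
    the traces not being fed back in, though it isn't proof of how the hosted
    service manages its own context.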