Comment by jayd16

19 hours ago

> Beyond that, isn't it just going to make up a narrative to fit what's in the prompt and context?
>
> I don't think there's any special introspection that can be done even in a mechanical sense, is there? That is to say, asking any other model, or a human, to read what was done and explain why would give you an accounting that is just as fictional.

Not necessarily. The people saying that in this thread seem to be forgetting about encrypted reasoning tokens. With modern models, the why of a decision is often recorded in a part of the context window you can't see. If you ask a model "why did you do that?", it isn't necessarily going to make up a plausible answer - it can see the reasoning traces that led up to that decision and just summarize them.
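To make the mechanism concrete, here is a minimal sketch of what a "why did you do that?" follow-up can look like on the wire. It's modeled loosely on OpenAI's Responses API, where encrypted reasoning items can be requested with `include=["reasoning.encrypted_content"]` and echoed back in stateless mode; the model name, the `"gAAAA..."` ciphertext, and the message contents are all hypothetical placeholders, so treat the exact field names as assumptions and check your provider's docs.

```python
# Turn 1 output as returned by the API. The reasoning item is opaque
# ciphertext to the caller, but the model can decode it on a later turn.
turn1_output = [
    {"type": "reasoning", "encrypted_content": "gAAAA..."},  # placeholder ciphertext
    {"type": "message", "role": "assistant",
     "content": "I renamed the variable to avoid shadowing."},
]

# Turn 2 request: the client echoes the reasoning item back alongside the
# follow-up question, so the model answers by summarizing its own recorded
# trace rather than inventing a plausible story after the fact.
turn2_input = turn1_output + [
    {"type": "message", "role": "user", "content": "Why did you do that?"},
]

request = {
    "model": "o4-mini",                         # hypothetical model name
    "input": turn2_input,
    "include": ["reasoning.encrypted_content"],  # ask for reasoning items back
    "store": False,                              # stateless: client keeps the trace
}
```

The key point is in `turn2_input`: the invisible-to-you reasoning from turn 1 travels with the question, which is why the answer isn't necessarily confabulated.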