Comment by bandrami

7 days ago

Well, right: I see those reasoning stages in reasoning models with Ollama, and if you ask the model after the fact what its reasoning was, what it says is different from what it said at the time.

I can't speak to your specific setup, but it sounds like you're halfway there if you can access the previous traces. All anyone can ask for is "show me the traces that led up to this point"; "why did you do this" is a notational convenience for querying that data. If your setup isn't summarizing those traces correctly, that sounds like a specific bug in the context handling or model quality, but the point is that the traces exist and are queryable in the first place, however you choose to do that.
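To make the point concrete, here's a minimal sketch (hypothetical names, not any particular agent framework's API) of treating "why did you do this?" as sugar for "show me the traces up to this point":

```python
from dataclasses import dataclass, field


@dataclass
class TraceEvent:
    """One recorded step of an agent run."""
    step: int
    action: str
    detail: str


@dataclass
class TraceLog:
    events: list[TraceEvent] = field(default_factory=list)

    def record(self, action: str, detail: str) -> None:
        """Append an event, numbering it by position in the log."""
        self.events.append(TraceEvent(len(self.events), action, detail))

    def why(self, step: int) -> list[TraceEvent]:
        """'Why did you do step N?' == 'show me the traces through step N'."""
        return self.events[: step + 1]


log = TraceLog()
log.record("tool_call", "search('weather in Oslo')")
log.record("tool_result", "sunny, 22C")
log.record("reply", "It's sunny in Oslo today.")

# Answering "why did you reply that?" is just querying the prefix of the log.
for ev in log.why(2):
    print(ev.step, ev.action, ev.detail)
```

Whether a model can then *summarize* that prefix faithfully is a separate question; the query itself is just a read over stored data.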

(I am still primarily talking about agent traces, as in the original post, not the internal reasoning blocks of a particular LLM call; those may or may not be available in context afterwards.)

In particular, asking "why" isn't a category error here, though it only has a meaningful answer if the model has the previous traces in its context, which is sometimes true and sometimes not.