Comment by Mtinie

22 minutes ago

In my experience that “blink of an eye” has turned out to be a single moment when the LLM misses a key point or fixates on the wrong thing. After that, it’s nearly impossible to recover, and the model behaves in ways that diverge noticeably from its prior behavior.

That single point is where the model commits fully to the misunderstanding. Once it crosses that line, each subsequent response compounds the error.