Comment by hombre_fatal
6 hours ago
Also, when you hit compaction at 200k tokens, that was probably when things were just getting good. The plan was in its final stage. The context had the hard-fought nuances discovered in the final moment. Or the agent just discovered some tiny important details after a crazy 100k token deep dive or flailing death cycle.
Now you have to compact and you don’t know what will survive. And the built-in UI doesn’t give you good tools like deleting old messages to free up space.
I’ll appreciate the 1M token breathing room.
I've found compaction kills the whole thing. Important debug steps go completely missing, and the AI loops back around thinking it's found a solution when we've already done that step.
I find it useful to make Claude track the debugging session with a markdown file. It’s like a persistent memory for a long session over many context windows.
Or make a subagent do the debugging and let the main agent orchestrate it over many subagent sessions.
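The markdown-progress trick above can be sketched as a tiny append-only log helper. This is just an illustration of the idea, not anything Claude-specific; the file name `DEBUG_LOG.md` and the entry format are my own invention:

```python
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("DEBUG_LOG.md")  # hypothetical file the agent re-reads after compaction

def log_step(summary: str, details: str = "") -> None:
    """Append a timestamped entry so a fresh context window can re-read the trail."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with LOG.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stamp}: {summary}\n{details}\n")

log_step("Reproduced the crash", "Fails only when the cache is cold.")
log_step("Ruled out race condition", "Single-threaded repro still fails.")
```

The point is that each hard-won finding survives as a heading the agent can skim, instead of living only in context that compaction may drop.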
Yeah, I use a markdown file to put progress in. It gets kind of long and convoluted, so manual intervention is required every so often. Works, though.
For me, Claude was like that until about two months ago. Now it rarely gets dumb after compaction like it did before.
Oh, I've found that something about compaction has been dropping everything that might be useful. Exact opposite experience.