Comment by d4rkp4ttern
4 hours ago
I like that it does not require following any particular "system" or discipline. But having to use a non-local/proprietary memory layer is not ideal.
My own fully local, minimalistic take on this problem of "session continuation without compaction" is to rely on the session JSONL files directly, rather than creating separate "memory" artifacts, and to seamlessly index them for fast full-text search. This is the idea behind the "aichat" command-group + plugin I just added to my claude-code-tools [1] repo. You can quit your Claude Code or Codex CLI session S and type
aichat resume <id-of-session-S-you-just-quit>
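To make the full-text-search piece concrete, here is a minimal sketch of the indexing idea in Python over SQLite FTS5. The ~/.claude/projects layout and the JSONL field names are assumptions for illustration, not aichat's actual implementation:

    import json
    import sqlite3
    from pathlib import Path

    db = sqlite3.connect("sessions.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS msgs USING fts5(session, role, text)")

    # Assumed layout: one JSONL file per session, one record per line,
    # each with a "message" object carrying "role" and "content".
    for path in Path.home().glob(".claude/projects/**/*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue
            msg = rec.get("message") or {}
            content = msg.get("content", "")
            if isinstance(content, list):  # content may be a list of typed blocks
                content = " ".join(b.get("text", "") for b in content if isinstance(b, dict))
            if content:
                db.execute("INSERT INTO msgs VALUES (?, ?, ?)",
                           (path.stem, msg.get("role", ""), content))
    db.commit()

A query like db.execute("SELECT DISTINCT session FROM msgs WHERE msgs MATCH ?", ("rollover",)) then surfaces the sessions worth resuming.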
The resume command launches a TUI offering a few ways to continue your work:
- blind trim: clones the session and truncates large tool calls/results and older assistant messages, which can clear up as much as 50% of the context, depending of course on what's in the session; a quick hack to keep working a bit longer (see the first sketch after this list)
- smart trim: similar, but uses a headless agent to decide what to truncate
- rollover: the one I use most frequently. It creates a new session S1 (optionally under a different CLI agent, enabling cross-agent work continuation) and injects back-pointers to the parent session S's JSONL file, the parent's parent, and so on (what I call the session lineage) into the first user message; you can then prompt the agent to use a sub-agent to extract arbitrary context from the ancestor sessions and continue the work (see the second sketch below)
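For the blind trim, something like the following captures the spirit. Again a sketch only: the thresholds, the "tool_result" record type, and the JSONL fields are assumptions, not the real heuristics:

    import json
    from pathlib import Path

    MAX_TOOL_CHARS = 2000  # assumed threshold for a "large" tool result
    KEEP_RECENT = 20       # assumed number of trailing messages left untouched

    def blind_trim(src: Path, dst: Path) -> None:
        lines = src.read_text().splitlines()
        cutoff = len(lines) - KEEP_RECENT
        out = []
        for i, line in enumerate(lines):
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                out.append(line)
                continue
            msg = rec.get("message") or {}
            content = msg.get("content")
            if isinstance(content, str):
                big_tool = rec.get("type") == "tool_result" and len(content) > MAX_TOOL_CHARS
                old_assistant = i < cutoff and msg.get("role") == "assistant"
                if big_tool or old_assistant:
                    msg["content"] = content[:200] + " ...[trimmed]"
            out.append(json.dumps(rec))
        dst.write_text("\n".join(out) + "\n")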
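And the rollover's lineage injection might look roughly like this; the helper and the preamble wording are hypothetical, but the point is that the new session starts with pointers to its ancestors rather than a summary of them:

    from pathlib import Path

    def lineage_preamble(ancestors: list[Path]) -> str:
        # ancestors[0] is the parent session's JSONL, ancestors[1] its parent, etc.
        pointers = "\n".join(f"  [{d}] {p}" for d, p in enumerate(ancestors, start=1))
        return (
            "This session continues earlier work. Session lineage "
            "(1 = parent, 2 = grandparent, ...):\n" + pointers + "\n"
            "When prior context is needed, have a sub-agent search these JSONL "
            "files for the relevant parts rather than loading them wholesale."
        )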
[1] https://github.com/pchalasani/claude-code-tools?tab=readme-o...