Comment by AndyNemmity
17 hours ago
I tend to agree with you; however, compacting has gotten much worse.
So... it's tough. I think memory abstractions are generally a mistake and generally not needed, but compacting has gotten so bad recently that they're required until Claude Code releases a version with improved compacting.
But I don't do memory abstraction like this at all. I use skills to manage plans, and the plans are the memory abstraction.
But that is more than memory. That is also about having a detailed set of things that must occur.
I’m interested to see your setup.
I think planning is a critical part of the process. I just built https://github.com/backnotprop/plannotator for a simple UX enhancement.
Before planning mode I used to write plans to a folder with descriptive file names. A simple `ls` was a nice memory refresher for the agent.
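Roughly, the pattern is something like this. A minimal sketch (the `plans/` folder, `save_plan`, and `list_plans` names are made up for illustration, not my actual setup):

```python
from pathlib import Path
from datetime import date

# Hypothetical plans folder; adjust to your project layout.
PLANS_DIR = Path("plans")
PLANS_DIR.mkdir(exist_ok=True)

def save_plan(slug: str, body: str) -> Path:
    """Write a plan to a descriptively named markdown file."""
    path = PLANS_DIR / f"{date.today():%Y-%m-%d}-{slug}.md"
    path.write_text(body)
    return path

def list_plans() -> list[str]:
    """The 'ls' refresher: plan file names double as a table of contents."""
    return sorted(p.name for p in PLANS_DIR.glob("*.md"))

if __name__ == "__main__":
    save_plan("migrate-auth-to-oauth", "# Plan: migrate auth to OAuth\n- step 1 ...\n")
    print("\n".join(list_plans()))
```

The point is just that the file names carry enough meaning that listing the folder is itself a memory refresher.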
I understand the use case for plannotator. I understand why you did it that way.
I am working alone, so I instead have plans update automatically. Same concept, but without a human in the mix.
But I am leaning on skills heavily here. I also have a Python script that manages how the LLM calls the plans, so it's all deterministic: it happens the same way every time.
That's my big push right now. Every single thing I do, I try to make as deterministic as possible.
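I can't share the script itself here, but the deterministic idea looks roughly like this: plain Python selects and assembles the plan context in a fixed order, so what the LLM sees never depends on the model's whims (the `plans/` folder, `load_plans`, and `build_prompt` names are hypothetical, just a sketch):

```python
from pathlib import Path

PLANS_DIR = Path("plans")  # hypothetical location

def load_plans() -> list[tuple[str, str]]:
    """Load every plan in sorted (deterministic) order."""
    return [(p.name, p.read_text()) for p in sorted(PLANS_DIR.glob("*.md"))]

def build_prompt(task: str) -> str:
    """Assemble the context the LLM gets, the same way every time:
    fixed header, plans in sorted order, then the task."""
    sections = ["# Active plans"]
    for name, body in load_plans():
        sections.append(f"## {name}\n{body.strip()}")
    sections.append(f"# Current task\n{task}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    print(build_prompt("Implement step 3 of the OAuth migration plan."))
```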
Would you share an overview of how it works? Sounds interesting.