Comment by christinetyip
8 hours ago
This is fair: many memory projects out there boil down to better summaries or prompt glue without any clear way to measure impact.
One thing I’d clarify about what we’re building is that it’s not meant to be “the best memory for a single agent.”
The core idea is portability and sharing, not just persistence.
Concretely:
- you can give Codex access to memory created while working in Claude
- Claude Code can retrieve context from work done in other tools
- multiple agents can read/write the same memory instead of each carrying their own partial copy
- specific parts of context can be shared with teammates or collaborators
That’s the part that’s hard (or impossible) to do with markdown files or tool-local memory, and it’s also why we don’t frame this as “breaking the context limit.”
Measuring impact here is tricky, but the problem we’re solving shows up as fragmentation rather than forgetting: duplicated explanations, divergent state between agents, and lost context when switching tools or models.
If someone only uses a single agent in a single tool and is already using a customized CLAUDE.md, they probably don’t need this. The value shows up once you treat agents as interchangeable workers rather than as a single long-running conversation.
> That’s the part that’s hard (or impossible) to do with markdown files or tool-local memory.
I'm confused because every single thing in that list is trivial? Why would Codex have trouble reading a markdown file Claude wrote or vice versa? Why would multiple agents need their own copy of the markdown file instead of just referring to it as needed? Why would it be hard to share specific files with teammates or collaborators?
Edit - I realize I could be more helpful if I actually shared how I manage project context:
CLAUDE.md or Agents.md is not the only place to store context for agents in a project, you can just store docs at any layer of granularity you want. What's worked best for me is to:
1. Have a standards doc(s) (you can point the agents to the same standards doc in their respective claude.md/agents.md)
2. Before coding, have the agent create implementation plans that get stored as tickets (markdown files), one for each chunk of work that would take roughly a context window's worth of effort (estimated).
3. Work through the tickets and update them as completed. Easy to refer back to when needed.
4. If you want, you can ask the agent to contribute to an overall dev log as well, but this gets long fast. It's useful for agents to read the last 50 lines or so to immediately get up to speed on "what just happened?", but git history can serve the same purpose.
5. Ultimately the code is going to be the real "memory" of the true state, so try to organize it in a way that's easy for agents to comb through (no 5,000-line files that agents struggle to jump around in to find what they need without immediately eating up their entire context window).
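To make the ticket step concrete, here's a tiny sketch of how status tracking can work (the `tickets/` directory and the leading `Status:` line are just my own conventions, not anything the tools require):

```python
from pathlib import Path

def open_tickets(tickets_dir="tickets"):
    """List ticket markdown files not yet marked completed.

    Convention (hypothetical): each ticket's first line is
    'Status: open' or 'Status: completed'.
    """
    remaining = []
    for ticket in sorted(Path(tickets_dir).glob("*.md")):
        lines = ticket.read_text().splitlines()
        status = lines[0].strip().lower() if lines else ""
        if status != "status: completed":
            remaining.append(ticket.name)
    return remaining
```

Any agent (or you) can run this to see what's left, and agents update the `Status:` line as they finish work, so the files double as both plan and progress log.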
You’re right that reading the same markdown file is trivial; that’s not the hard part.
Where it stopped being trivial for us was once multiple agents were working at the same time. For example, one agent is deciding on an architecture while another is already generating code. A constraint changes mid-way. With a flat file, both agents can read it, but you’re relying on humans as the coordination layer: deciding which docs are authoritative, when plans are superseded, which tickets are still valid, and how context should be scoped for a given agent.
This gets harder once context is shared across tools or collaborators’ agents. You start running into questions like who can read vs. update which parts of context, how to share only relevant decisions, how agents discover what matters without scanning a growing pile of files, and how updates propagate without state drifting apart.
You can build conventions around this with files, and for many workflows that works well. But once multiple agents are updating state asynchronously, the complexity shifts from storage to coordination. That boundary (sharing and coordinating evolving context across many agents and tools) is what we’re focused on and what an external memory network can solve.
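To illustrate what I mean by coordination metadata, here's a toy sketch (field names like `superseded_by` and `readers` are purely illustrative, not our actual schema) of the bookkeeping a flat file doesn't carry on its own:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class MemoryEntry:
    """Toy illustration only; not a real implementation."""
    id: int
    text: str
    author: str                        # which agent/tool wrote it
    readers: frozenset                 # who is allowed to retrieve it
    created_at: float = field(default_factory=time.time)
    superseded_by: Optional[int] = None  # id of the entry replacing this one

def visible(entries, agent):
    """Entries `agent` may read that no later entry has superseded."""
    return [e for e in entries
            if e.superseded_by is None and agent in e.readers]

def supersede(entries, old_id, new_entry):
    """Record that new_entry replaces old_id instead of silently editing."""
    for e in entries:
        if e.id == old_id:
            e.superseded_by = new_entry.id
    entries.append(new_entry)
```

The point isn't the data structure itself, it's that authority, supersession, and scoping become explicit state that agents can query, instead of conventions a human has to enforce across a pile of files.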
If you’ve found ways to push that boundary further with files alone, I’d genuinely be curious; this still feels like an open design space.
You're still not closing the gap between the problems you're naming and how your solution solves them.
> With a flat file, both agents can read it, but you’re relying on humans as the coordination layer: deciding which docs are authoritative, when plans are superseded, which tickets are still valid, and how context should be scoped for a given agent.
So the memory system also automates project management by removing "humans as the coordination layer"? From the OP the only details we got were
"What it does: (1) persists context between sessions (2) semantic & temporal search (not just string grep)"
Which are fine, but neither it nor you explain how it can solve any of these broader problems you bring up:
"deciding which docs are authoritative, when plans are superseded, which tickets are still valid, and how context should be scoped for a given agent, questions like who can read vs. update which parts of context, how to share only relevant decisions, how agents discover what matters without scanning a growing pile of files, and how updates propagate without state drifting apart."
You're claiming that semantic and temporal search has solved all of this for free? This project was presented as a memory solution, and now it seems like you're saying it's actually an agent orchestration framework, but the gap between what you're claiming your system can achieve and how you claim it works seems vast.