Comment by saberience

6 hours ago

Do we really need another vibe-coded LLM context/memory startup?

Do the authors have any benchmarks or tests showing that this genuinely improves outputs?

I have tried probably 10-20 open-source and closed-source projects purporting to improve Claude Code with memory/context, and to this day nothing works better than simply keeping my own library of markdown files - one per project specification, one per decision made, etc. - and then explicitly telling Claude Code to review x, y, z markdown files.
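For anyone who hasn't tried the workflow described above, a minimal sketch (the file names, layout, and example prompt are illustrative, not from the commenter):

```shell
# Keep one markdown file per spec and per decision, tracked in git
# alongside the code.
mkdir -p docs/specs docs/decisions

cat > docs/specs/auth.md <<'EOF'
# Auth spec
- Sessions are JWT-based, 24h expiry.
EOF

cat > docs/decisions/001-database.md <<'EOF'
# Decision 001: Use Postgres
Chosen over SQLite because we need concurrent writers.
EOF

# Then point the agent at the relevant files explicitly, e.g.:
#   claude "Read docs/specs/auth.md and docs/decisions/001-database.md,
#           then implement the login route."
```

The explicit pointer in the prompt is the whole trick: the model only sees what you tell it to read, so there is no retrieval layer to go wrong.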

I would also suggest to the founders: don't found a startup based on improving context for Claude Code. Why? Because this is the number one thing the Claude Code developers themselves are working on, and it's clearly getting better with every release.

So not only are you competing with like 20+ other startups and 20+ other open-source projects, you are competing with Anthropic too.

This. Exactly this. Even tools that work relatively well (in my experience, for my project types), like Agent OS, are no guarantee that Claude won't go off on a tangent instead of using the "memory files" the framework tells it to use.

And I agree with your sentiment that this is a "business field" that will get eaten by the next generations of base models getting better.

I mostly agree with this: if the goal were “better persistent memory inside Claude Code,” that wouldn’t be very interesting.

For a single agent and a single tool, keeping project specs and decisions in markdown and explicitly pointing the model at them works well. We do that too.

What we’re focused on is a different boundary: memory that isn’t owned by a specific agent or tool.

Once you start switching between tools (Claude, Codex, Cursor, etc.), or running multiple agents in parallel, markdown stops being “the memory” and becomes a coordination mechanism you have to keep in sync manually. Context created in one place doesn’t naturally flow to another, and you end up re-establishing state rather than accumulating it.

That’s why we’re not thinking about this as “improving Claude Code.” We’re interested in the layer above that: a shared, external memory that can be plugged into any model or tool, that any agent can read from or write to, and that can be selectively shared with collaborators. Context created in Claude can be reused in Codex, Manus, Cursor, or collaborators’ agents - and vice versa.

If you’ve built with one agent in one tool and are happy with markdown, you probably don’t need this. The value shows up once agents are treated as interchangeable workers and context needs to move across tools and people without being re-explained each time.

  • If markdown in a git repository isn’t good enough for collaboration, why would any plugged-in abstraction be better?

    You imply you have a solution for current holistic state. For that you would need a solution for context decay and relevance curation - with benchmarks proving it is also more valuable than constant rediscovery (on both quality and cost).

    That narrative becomes harsher once you pivot to “general-purpose agents,” because you’re then competing with every existing knowledge-work platform. So you’ll shift into “unified context for all your knowledge-work platforms” - where, presumably, the agents already have access (Claude today can basically go scrape knowledge from anywhere).

    So then it becomes an offering of “current state” in complex human processes, and I’m not sure any technology can capture that concept - whether across codebases (where for humans we settled on git), and especially not across general working scenarios. And I guess this is where it becomes unified multi-agent holistic state capture. Ambitious and fun problem.

Right. I stopped reading at "ENSUE_API_KEY | Required. Get one at [dashboard](link to startup showing this is an ad)".

First thought: why do I need an API key for something that can be local markdown files? Make the contents of CLAUDE.md "Refer to ROBOTS.md" and you've got yourself a multi-model solution.
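The indirection suggested above can be set up in a few lines; the ROBOTS.md contents and the AGENTS.md file name are illustrative (AGENTS.md is the convention Codex and some other tools read, but check your tool's docs):

```shell
# Hypothetical one-level indirection: each tool-specific memory file
# just points at a single shared, tool-agnostic context file.
cat > ROBOTS.md <<'EOF'
# Project context (shared across models/tools)
- Monorepo; backend lives in services/api.
- Never commit directly to main.
EOF

echo "Refer to ROBOTS.md" > CLAUDE.md   # read by Claude Code
echo "Refer to ROBOTS.md" > AGENTS.md   # read by Codex and similar tools
```

Every tool sees the same context, and there is exactly one file to keep up to date - no API key, no external service.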

The main objection to corporate AI uptake is what you're going to do with our data. The value prop here over local markdown files is not clear enough to even begin asking that question.