Comment by diwank
17 days ago
Working on Memory Store: persistent, shared memory for all your AI agents.
The problem: if you use multiple AI tools (Claude, ChatGPT, Cursor, etc.), none of them know what the others know. You end up maintaining .md files, pasting context between chats, and re-explaining your project every time you start a new conversation. Power users spend more time briefing their agents than doing actual work.
Memory Store is an MCP server that ingests context from your workplace tools (Slack, email, calendar) and makes it available to any MCP-compatible agent. Make a decision in one tool, the others know. Project status changes, every agent is up to date.
We ran 35 in-depth user interviews and surveyed 90 people before writing a line of product code — 95% had already built workarounds for this problem (custom GPTs, claude.md templates, copy-paste workflows). The pain is real and people are already investing effort to solve it badly.
Early feedback has been encouraging. One founder tracked investor conversations through Memory Store and estimated he talked to 4-5x more people because his agents could draft contextual replies without manual briefing. It helped him close his round.
Live in beta now. Would love feedback from anyone who's felt this pain! :)
I implemented a similar process:
- First, a snapshot: generated day-by-day files from the last 2 years of commits and used Claude to enrich every commit message (why, how, where, and output). Also created a File.java.md file with the enriched commit messages per source file.
- With the history in place, I embedded all of it into a PostgreSQL database.
- Implemented an MCP server to query any of it.
- Created watchdogs to follow up on the projects I set up: a git hook creates a stub file per commit.
- The process then enriches the new stub files and indexes them into PostgreSQL.
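A minimal sketch of the per-commit stub step, as I understand it from the description above. The names (`CommitInfo`, `make_stub`) and the markdown layout are my own assumptions, not from the actual setup; in practice a git post-commit hook would collect this metadata, and a separate Claude-powered job would later fill in the why/how/where/output enrichment.

```python
from dataclasses import dataclass, field

@dataclass
class CommitInfo:
    """Metadata a post-commit hook can cheaply collect (hypothetical shape)."""
    sha: str
    message: str
    files: list = field(default_factory=list)

def make_stub(commit: CommitInfo) -> str:
    """Render a markdown stub that a later enrichment pass will complete."""
    lines = [
        f"# Commit {commit.sha}",
        "",
        f"**Message:** {commit.message}",
        "",
        "**Files touched:**",
        *[f"- {path}" for path in commit.files],
        "",
        "<!-- status: pending-enrichment (why/how/where/output to be added) -->",
    ]
    return "\n".join(lines)

stub = make_stub(CommitInfo("a1b2c3d", "Fix null check in parser", ["src/Parser.java"]))
print(stub)
```

The stub-then-enrich split matters: the hook stays fast and offline, while the slow LLM enrichment runs asynchronously over whatever stubs are pending.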
I did this for all internal projects, and the first rule in CLAUDE.md is: ask the MCP for any information. Now Claude knows what the other related projects did and what it should adopt from them.
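To make the "ask the MCP" retrieval step concrete, here is a toy sketch of embedding-based lookup. Everything here is an illustrative stand-in: the real setup stores vectors in PostgreSQL (e.g. via pgvector) and uses a proper embedding model, whereas this uses an in-memory list and a bag-of-characters "embedding" just to show the shape of index-then-rank-by-cosine.

```python
import math

def embed(text: str) -> list:
    """Toy bag-of-characters vector; a real system would call an embedding model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

index = []  # in-memory stand-in for the PostgreSQL table of enriched commit docs

def add_doc(doc_id: str, text: str) -> None:
    index.append((doc_id, text, embed(text)))

def query(q: str, k: int = 3) -> list:
    qv = embed(q)
    ranked = sorted(index, key=lambda row: cosine(qv, row[2]), reverse=True)
    return [doc_id for doc_id, _, _ in ranked[:k]]

add_doc("a1b2c3d", "Fix null check in the JSON parser; avoids NPE on empty body")
add_doc("d4e5f6a", "Add retry with backoff to the HTTP client")
hits = query("parser null pointer fix", k=1)
```

With pgvector the ranking would instead be a SQL `ORDER BY embedding <=> query_vector LIMIT k`, but the flow the MCP server exposes is the same: embed the question, rank stored docs, return the top matches.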