Comment by bryanhogan

5 days ago

Why would you not use context files in form of .md? E.g. how the SpecKit project does it.

Memory features are useful for the same reason that a human would use a database instead of a large .md file: it's more efficient to query for something and get exactly what you want than it is to read through a large, ultimately less structured document.

That said, Claude now has a native memory feature as of the 2.0 release recently: https://docs.claude.com/en/docs/claude-code/memory so the parent's tool may be too late, unless it offers some kind of advantage over that. I don't know how to make that comparison, personally.

  • Claude’s memory function adds a note to the file(s) that it reads on startup. Whereas this tool pulls from a database of memories on-demand.

    • So hilariously, I hadn't actually read those docs yet, I just knew they added the feature. It seems like the docs may not be up to date, as when I read them in response to your reply here, I was like "wait, I thought it was more sophisticated than that!"

      The answer seems to be both yes and no: see their announcement on youtube yesterday: https://www.youtube.com/watch?v=Yct0MvNtdfU&t=181s

      It's still ultimately file-based, but it can create non-Claude.md files in a directory it treats more specially. So it's less sophisticated than I expected, but more sophisticated than the previous "add this to claude.md" feature they've had for a while.

      Thanks for the nudge to take the time to actually dig into the details :)

      10 replies →

  • The other point here: I wanted something more in line with LLMs' natural language, something that can be queried efficiently just by using normal language, almost the way we think. We first have a thought, and then we go through our memory archive.

  • It's had native memory in the form of per-directory CLAUDE.md files for a while though. Not just 2.0

I still do, but having this allows for strategies like memory decay for older information. It also allows for much more structured search, instead of opening files that are less structured.

.md files work great for small projects. But they hit limits:

1. Size - a 100KB context.md won't fit in the window

2. No search - Claude reads the whole file every time

3. Manual - you decide what to save, not Claude

4. Static - it doesn't evolve or learn

Recall fixes this:

- Semantic search finds relevant memories only

- Auto-captures context during conversations

- Handles 10k+ memories, retrieves top 5

- Works across multiple projects

Real example: I have 2000 memories. That's 200KB in .md form. Recall retrieves 5 relevant ones = 2KB.
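The retrieval step can be sketched like this (a toy illustration, not Recall's actual code: plain word-overlap cosine similarity stands in for real vector embeddings, and the memories are made up):

```python
# Toy sketch of on-demand memory retrieval: score stored memories
# against a query and return only the top-k, instead of loading an
# entire context.md into the context window.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over word-count vectors (embedding stand-in).
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(memories: list[str], query: str, k: int = 5) -> list[str]:
    q = Counter(query.lower().split())
    scored = [(cosine(Counter(m.lower().split()), q), m) for m in memories]
    scored.sort(key=lambda t: t[0], reverse=True)
    # Only return memories with nonzero relevance.
    return [m for score, m in scored[:k] if score > 0]

memories = [
    "project uses postgres 15 with pgvector for embeddings",
    "deploy script lives in scripts/deploy.sh",
    "user prefers tabs over spaces",
    "API rate limit is 100 requests per minute",
]
print(top_k(memories, "which database does the project use", k=2))
```

The point is the shape of the operation: the query touches a small index, and only the handful of matching memories ever enters the context window.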

And of course, there's always the option to use both: .md for docs, Recall for dynamic learning.

Does that help?

  • I'm not sure. You don't use a single context.md file; you use multiple and add them to context when relevant. AIs adjust these as you need, so they do "evolve". So what you're trying to achieve is already solved.

    These two videos on using Claude well explain what I mean:

    1. Claude Code best practices: https://youtu.be/gv0WHhKelSE

    2. Claude Code with Playwright MCP and subagents: https://youtu.be/xOO8Wt_i72s

    • Yeah, that's a solid workflow, and honestly simpler than what I built. I think Recall makes sense once you hit the scale where managing multiple .md files becomes tedious (say, 50+ conversations across 10 projects), but you're right that for most people your approach works great and is way less complex.

  • Can't you get recency just from git blame? Editors already show you each source line's last-touch age, even in READMEs, and even though this can get obfuscated (by reformatters, file moves, etc.) it's still a decent indicator.
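    For reference, a minimal demo of that git blame approach (assuming git is installed; it builds a throwaway repo so the commands have something to blame):

    ```shell
    # Create a scratch repo, commit a README, then show per-line
    # last-touch info via git blame with relative dates.
    set -e
    dir=$(mktemp -d)
    cd "$dir"
    git init -q
    git config user.email "demo@example.com"
    git config user.name "demo"
    echo "hello from the readme" > README.md
    git add README.md
    git commit -qm "add readme"
    git blame --date=relative README.md
    ```

    Each output line carries the commit, author, and age of that line's last change, which is the recency signal the comment describes.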