Comment by nanookclaw

17 hours ago

I am doing something very similar to this. I think the workspace _being_ the memory is the way to go. It has also made the workspace completely model- and harness-agnostic.

On top of that, I actually keep it as an Obsidian vault, and I have the LLM itself write Obsidian markdown, using frontmatter and wikilinks for knowledge-graph linking. That makes the data easy for me to navigate and deeply enjoyable to interact with, and it also helps the model navigate the files.
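To give a rough idea, a note in a vault like this might look something like the sketch below (the field names, tags, and linked note titles here are all made up for illustration, not taken from my actual setup): YAML frontmatter the model can parse for metadata, plus `[[wikilinks]` `]`-style links that both Obsidian's graph view and the model can follow.

```markdown
---
type: decision
tags: [memory, agent-workspace]
status: active
---

# Switch to file-based memory

Context lives in the vault itself, so any model or harness can read it.
Related: [[Harness comparison]], [[2024-05-01 daily log]]
```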