Comment by andorellefsen

1 day ago

Well, I have a very specific use case. My team and I built something like a shared memory for LLMs: an LLM makes an MCP call to our API with a general question. An AI agent on our side decides which tools to call (search, graph exploration, etc.) to find memory "fragments" based on tags and semantic search. The agent then checks whether the content of each fragment is actually relevant to the original question, and only then returns it. Roughly, the flow looks like the sketch after this paragraph.
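Here is a minimal sketch of that flow. All names (MemoryFragment, tag_search, is_relevant, etc.) are made up for illustration, not our actual API, and the relevance check is a word-overlap stub where the real system uses an LLM judgment:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryFragment:
    content: str
    tags: set[str] = field(default_factory=set)

# Tiny in-memory store standing in for the real backend (vector DB / graph).
STORE = [
    MemoryFragment("Deploy pipeline uses blue-green releases", {"deploy", "infra"}),
    MemoryFragment("Team prefers Postgres for relational data", {"db", "prefs"}),
]

def tag_search(question: str) -> list[MemoryFragment]:
    """Stand-in for tag-based lookup: match question words against tags."""
    words = set(question.lower().split())
    return [f for f in STORE if f.tags & words]

def semantic_search(question: str) -> list[MemoryFragment]:
    """Stand-in for embedding search: naive word-overlap scoring."""
    words = set(question.lower().split())
    scored = [(len(words & set(f.content.lower().split())), f) for f in STORE]
    return [f for score, f in sorted(scored, key=lambda x: x[0], reverse=True) if score > 0]

def is_relevant(question: str, fragment: MemoryFragment) -> bool:
    """In the real system an LLM judges relevance; here, a simple overlap check."""
    q = set(question.lower().split())
    return bool(q & set(fragment.content.lower().split()) or q & fragment.tags)

def answer_memory_query(question: str) -> list[str]:
    """Rough shape of the server-side agent behind the MCP call."""
    # 1. The agent picks which retrieval tools to run (here: always both).
    candidates = {id(f): f for f in tag_search(question) + semantic_search(question)}
    # 2. It filters the candidate fragments for relevance to the original question.
    return [f.content for f in candidates.values() if is_relevant(question, f)]

if __name__ == "__main__":
    print(answer_memory_query("which db does the team use?"))
```

The only real design point is the two-stage shape: cheap retrieval first (tags plus semantic search), then a separate relevance pass before anything goes back over the MCP boundary.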