
Comment by CGamesPlay

7 months ago

An interesting paper published recently takes a different approach: Human-like Episodic Memory for Infinite Context LLMs <https://arxiv.org/abs/2407.09450>

It isn't focused on RAG, but there seems to be a lot of crossover to me. Using the LLM to form "episodes" is a similar problem to chunking, and letting the LLM decide the boundaries might also yield good results.