Comment by jonahx

1 day ago

> 3. Learned behavior. It's ironic how even something like ChatGPT (it has hundreds of chats with me) barely knows anything about me & I constantly need to remind it of things.

I've wondered about this. Perhaps the concern is that saved data will eventually overwhelm the context window? And so you must be judicious about the "background knowledge" about yourself that gets remembered, and this problem is harder than it seems?

Btw, you can ask ChatGPT to "remember this". IME the feature doesn't always seem to work, but don't quote me on that.

Yes, but this should be trivial to do with an internal `MEMORY` tool the LLM calls. I know the context can't grow infinitely, but that shouldn't prevent filling it with relevant info when discussing topic A (even a lazy RAG approach should work).
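
Something like the toy sketch below, where "memory" is just a JSON file and retrieval is plain keyword overlap. (All the names, the file store, and the scoring here are illustrative assumptions on my part, not how ChatGPT's actual memory feature works.)

```python
import json
import re

MEMORY_FILE = "memories.json"  # hypothetical persistent store


def load_memories() -> list[dict]:
    try:
        with open(MEMORY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []


def save_memory(fact: str, tags: list[str]) -> None:
    """What the model would invoke via a MEMORY tool call."""
    memories = load_memories()
    memories.append({"fact": fact, "tags": tags})
    with open(MEMORY_FILE, "w") as f:
        json.dump(memories, f, indent=2)


def retrieve(query: str, limit: int = 5) -> list[str]:
    """Lazy RAG: rank stored facts by keyword overlap with the query."""
    query_words = set(re.findall(r"\w+", query.lower()))

    def score(memory: dict) -> int:
        fact_words = set(re.findall(r"\w+", memory["fact"].lower()))
        tag_words = {t.lower() for t in memory["tags"]}
        return len(query_words & (fact_words | tag_words))

    scored = sorted(
        ((score(m), m["fact"]) for m in load_memories()),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return [fact for s, fact in scored if s > 0][:limit]


def build_prompt(user_message: str) -> str:
    """Prepend the relevant memories so the model sees them as context."""
    background = "\n".join(f"- {fact}" for fact in retrieve(user_message))
    return f"Known about the user:\n{background}\n\nUser: {user_message}"


if __name__ == "__main__":
    # Demo only; rerunning appends duplicate facts.
    save_memory("Prefers Python examples over JavaScript", ["code", "python"])
    save_memory("Works on a RAG pipeline at their day job", ["rag", "work"])
    print(build_prompt("Can you show me a Python example of RAG?"))
```

A real system would presumably use embeddings, deduplication, and relevance thresholds instead of keyword counting, but even this shows the shape of it: save on tool call, retrieve on topic match, inject into context.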

  • What you're describing is just RAG, and it doesn't work that well. (You need a search engine for RAG, and the ideal search engine is an LLM with infinite context. But the only way to scale LLM context is by using RAG. We have infinite recursion here.)