
Comment by kadushka

4 days ago

Long-term memory in an LLM is its weights.

Not really: humans can form long-term memories from conversations, but LLM users aren't fine-tuning models after every chat so that the model remembers them.

  • He's right, but most people have neither the resources nor, indeed, the weights themselves to keep training the models. Still, the weights are very much long-term memory.

  • users aren’t finetuning models after every chat

    Users can do that if they want, but it's more effective and more efficient to do it after every billion chats, and I'm sure OpenAI does; a rough sketch of that batching idea is below.
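
    One way to picture the batching: buffer transcripts and only trigger a training run once enough have accumulated. A minimal sketch; the names here (CHAT_BUFFER, FINETUNE_EVERY, finetune_on) are hypothetical, and a real pipeline would plug in an actual training stack rather than a placeholder.

    ```python
    from collections import deque

    FINETUNE_EVERY = 1_000_000          # hypothetical threshold; "every billion chats" above
    CHAT_BUFFER: deque[str] = deque()   # accumulated chat transcripts

    def finetune_on(transcripts: list[str]) -> None:
        """Placeholder for one batched gradient-update run over the transcripts."""
        print(f"fine-tuning on {len(transcripts)} chats")

    def record_chat(transcript: str) -> None:
        CHAT_BUFFER.append(transcript)
        if len(CHAT_BUFFER) >= FINETUNE_EVERY:
            finetune_on(list(CHAT_BUFFER))  # one batched update instead of one per chat
            CHAT_BUFFER.clear()
    ```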

    • If you want the entire model to remember everything it talked about with every user, sure. But ideally, I would want the model to remember what I told it a few million tokens ago, but not what you told it, because to me the model should look like my private copy that talks only to me. A sketch of one way to get that, with per-user adapters, is below.
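
      One plausible way to get a "private copy" without duplicating the base model is a per-user LoRA adapter: the shared weights stay frozen, and each user's chats update only a small adapter that nobody else loads. A hedged sketch using the Hugging Face transformers and peft libraries; the model choice (gpt2), the hyperparameters, and the adapters/alice path are illustrative stand-ins, not anyone's production setup.

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, PeftModel, get_peft_model

      base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for the shared base model
      tok = AutoTokenizer.from_pretrained("gpt2")

      # Wrap the frozen base with one small trainable adapter for this user.
      cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
      model = get_peft_model(base, cfg)

      def train_on_user_chats(chats: list[str], steps: int = 10) -> None:
          """Run a few causal-LM updates on this user's transcripts only."""
          opt = torch.optim.AdamW(
              (p for p in model.parameters() if p.requires_grad), lr=1e-4)
          model.train()
          for step in range(steps):
              batch = tok(chats[step % len(chats)], return_tensors="pt", truncation=True)
              loss = model(**batch, labels=batch["input_ids"]).loss
              loss.backward()
              opt.step()
              opt.zero_grad()

      train_on_user_chats(["user: remember that my dog is named Rex"])
      model.save_pretrained("adapters/alice")  # saves only the adapter weights, a few MB

      # Later, "my private copy" is just the shared base plus my adapter:
      private = PeftModel.from_pretrained(
          AutoModelForCausalLM.from_pretrained("gpt2"), "adapters/alice")
      ```

      The adapter is tiny relative to the base model, so storing one per user is feasible in a way that storing a full fine-tuned copy per user is not.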
