Comment by sixQuarks

4 days ago

But context windows for LLMs include all the "long-term memory" things you're excluding from humans

Long-term memory in an LLM is its weights.

  • Not really, because humans can form long-term memories from conversations, but LLM users aren't fine-tuning models after every chat so that the model remembers them.

    • He's right that most people don't have the resources, nor indeed the weights themselves, to keep training the models. But the weights are very much long-term memory.

    • "users aren't fine-tuning models after every chat"

      Users can do that if they want, but it's more effective and more efficient to do it after every billion chats, and I'm sure OpenAI does.
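
      For what it's worth, here's a minimal sketch of the per-chat version, assuming OpenAI's hosted fine-tuning API via the Python SDK. The transcript and model name are placeholders, and the endpoint itself enforces a minimum number of training examples, which already hints at why batching wins:

      ```python
      # Hedged sketch: "remember" one chat by fine-tuning a hosted model on it,
      # using OpenAI's fine-tuning API. Transcript and model are placeholders.
      import json
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # One chat, in the JSONL messages format the fine-tuning endpoint expects.
      chat = {
          "messages": [
              {"role": "user", "content": "My cat's name is Miso."},
              {"role": "assistant", "content": "Got it, Miso. I'll remember that."},
          ]
      }

      # Note: OpenAI rejects training files with fewer than 10 examples, so a
      # true per-chat job needs padding; in practice you'd accumulate chats.
      with open("chats.jsonl", "w") as f:
          for _ in range(10):
              f.write(json.dumps(chat) + "\n")

      # Upload the training data, then start a fine-tuning job on it.
      upload = client.files.create(file=open("chats.jsonl", "rb"), purpose="fine-tune")
      job = client.fine_tuning.jobs.create(
          training_file=upload.id,
          model="gpt-4o-mini-2024-07-18",  # placeholder; any fine-tunable model
      )
      print(job.id, job.status)
      ```

      Each job costs money and takes minutes to hours to run, so accumulating conversations and fine-tuning in large batches is the only version of this that scales.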
