Long-term memory in an LLM is its weights.
Not really: humans form long-term memories from conversations, but LLM users aren't fine-tuning models after every chat so that the model remembers them.
He's right, but most people don't have the resources, nor indeed the weights themselves, to keep training the models. The weights are very much long-term memory, though.
> users aren't fine-tuning models after every chat
Users can do that if they want, but it's more effective and more efficient to fine-tune in batch after every billion chats, and I'm sure OpenAI does.
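For concreteness, here's a minimal sketch of what "fine-tuning on accumulated chats" could look like with an open-weights model via the Hugging Face stack. Everything here is an assumption for illustration: the gpt2 stand-in, the chat_logs.jsonl dump of transcripts, and the hyperparameters; it's not a description of what OpenAI actually does.

    # Minimal sketch: batch fine-tune a causal LM on accumulated chat
    # transcripts so the new knowledge ends up in the weights.
    # Assumes "chat_logs.jsonl" with one {"text": ...} record per chat.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # stand-in for any open-weights model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Load the batched chats and tokenize them for causal-LM training.
    dataset = load_dataset("json", data_files="chat_logs.jsonl", split="train")
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # after this, the updated weights "remember" the chats
    model.save_pretrained("ft-out")

The batching is the whole point of the comment: one gradient pass over a billion chats amortizes the training cost in a way per-conversation updates never could.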