Comment by cowwoc2020
14 hours ago
Makes me wish that, shortly before the server-side expiration, we could save the cache client-side indefinitely.
But my understanding is that we're talking about ~60 GB of data per session, so it sounds unrealistic to do...
Where are you getting 60GB from? It shouldn’t be that large.
But yes, I would love to save the context/cache so it can be played back or referred to later if needed.
/compact is a little black box that I just have to trust to keep the important bits.
The KV cache consists of the key and value activation vectors for every attention head at every layer of the model, for every token in context, so it gets quite large. ChatGPT also estimates 60–100 GB for a full token context on an Opus-sized model:
https://chatgpt.com/share/69dc5030-268c-83e8-92c2-6cef962dc5...
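For a rough sanity check of that figure, here's a back-of-envelope sketch of the standard KV-cache size formula (2 tensors per layer, K and V, each of shape [seq_len, kv_heads, head_dim]). The hyperparameters below are purely illustrative guesses for a large model with grouped-query attention; Opus's actual architecture is not public.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV cache size: 2 tensors (K and V) per layer,
    each [seq_len, num_kv_heads, head_dim], at the given precision."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative (made-up) hyperparameters: 120 layers, 8 KV heads (GQA),
# head_dim 128, a 200k-token context, fp16 (2 bytes per element).
size = kv_cache_bytes(num_layers=120, num_kv_heads=8, head_dim=128,
                      seq_len=200_000, bytes_per_elem=2)
print(f"~{size / 1e9:.0f} GB")  # ~98 GB
```

With those assumed numbers it lands in the same 60–100 GB ballpark; a dense model without grouped-query attention (KV heads = full head count) would be an order of magnitude larger still.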