Comment by vanviegen
6 days ago
We only have open models to go by, so looking at GLM 5.1 for instance, we're talking about almost 300 GB of kv-cache for a full context window of 200k tokens.
That's hardly tiny.
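For anyone wanting to sanity-check that figure, the KV-cache footprint follows from a standard formula (2 tensors per layer × KV heads × head dim × sequence length × bytes per element). A minimal sketch in Python; the layer/head counts below are illustrative assumptions chosen to land near 300 GB, not GLM's published architecture (which may use GQA or MLA and have a much smaller cache):

```python
# Back-of-envelope KV-cache sizing. The formula is standard; the model
# parameters in the example call are illustrative assumptions, NOT the
# actual GLM config.
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Bytes of K and V cache for one sequence (factor 2 = K plus V)."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical large model at fp16 with a full 200k-token context:
size = kv_cache_bytes(num_layers=90, num_kv_heads=32, head_dim=128,
                      seq_len=200_000)
print(f"{size / 1e9:.0f} GB")  # ~295 GB, in the ballpark the comment cites
```

Note how sensitive the result is to the KV-head count: a model using grouped-query attention with, say, 8 KV heads instead of 32 cuts the same cache to roughly a quarter.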