Comment by kossTKR

5 days ago

Really? So the system recognises that someone asked the same question and serves the same answer? And who on earth shares the exact same context?

I mean, I get the idea, but it sounds so incredibly rare that it would mean absolutely nothing optimisation-wise.

Yes. It is not incredibly rare, it's incredibly common. A huge percentage of queries to retail LLMs are things like "hello" and "what can you do", with static system prompts that make the total context identical.
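A minimal sketch of what that looks like, assuming the cache is keyed on a hash of the full context (static system prompt plus user message); the in-memory dict and the `generate` callable here are hypothetical stand-ins for the provider's real cache store and inference backend:

```python
import hashlib
from typing import Callable

SYSTEM_PROMPT = "You are a helpful assistant."  # static across all users

# hash of the full context -> cached completion
_cache: dict[str, str] = {}

def cached_completion(user_message: str, generate: Callable[[str], str]) -> str:
    # The key covers the *entire* context, so two requests only
    # collide when the system prompt and message are byte-identical.
    context = SYSTEM_PROMPT + "\n" + user_message
    key = hashlib.sha256(context.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = generate(context)  # only hit the GPU on a miss
    return _cache[key]

# Every "hello" after the first is served from the cache, no GPU needed.
answer = cached_completion("hello", generate=lambda ctx: "Hi! How can I help?")
```

Since the system prompt is identical for every user, a query like "hello" maps to the same key millions of times, and only the first one pays for inference.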

It's worth maybe a 3% reduction in GPU usage. So call it a half billion dollars a year or so, for a medium to large service.

> It's worth maybe a 3% reduction in GPU usage. So call it a half billion dollars a year or so, for a medium to large service.

So if 3% is 500M, then annual spend is ~16.6B. That is medium-sized these days?

Even if that were the case, you wouldn't be wrong. Adding caching and deduplication (and clever routing and sharding, and ...) on top of timesharing doesn't somehow make it not timesharing anymore. The core observation about the raw numbers still applies.