Comment by rvnx
18 hours ago
I think you have to see this as a bunch of stateless requests, and this makes the problem way easier.
LLM requests that do not call tools do not need anything external by definition.
No central server, nothing, they can even survive without the context cache.
All you need is to load (and only once!) the read-only, immutable model weights from an S3-like source at startup.
If it takes 4 servers to process a request, you can group the servers 4 by 4 and route each request to one group (sharding).
Copy-paste the exact same setup XXX times and there you have your highly parallelizable service (until you run out of money).
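The routing layer for this can be almost trivially dumb. A minimal Python sketch of the group-and-replicate idea (the hostnames, group size, and helper name are made up for illustration, not anyone's actual stack):

```python
import random

# Hypothetical inventory: each group is 4 servers that together hold one
# sharded copy of the model (hostnames are invented for the sketch).
GROUPS = [
    ["infer-00", "infer-01", "infer-02", "infer-03"],
    ["infer-04", "infer-05", "infer-06", "infer-07"],
    # ... copy-paste the same setup as many times as the budget allows ...
]

def pick_group() -> list[str]:
    """Any group can serve any request: the only state is the immutable
    weights each server loaded once from S3-like storage at startup."""
    return random.choice(GROUPS)
```

Because the requests are stateless, the picker needs no coordination, no sticky sessions, no central registry: adding capacity is appending another group to the list.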
It's very doable, any serious SRE can find a way to set up "larger than one card" models like Kimi or DeepSeek (unquantized) if they have a tightly-coupled HPC cluster (or a pair of very, very beefy servers).
If you run out of servers, that is again a money problem, not an architectural one (and modern datacenters are already scalable).
Take the best SRE, but no budget, and there is no solution.
So inference is the easy part.
For Codex or Claude Code, if a request takes a lot of time or has slow cold-start latency, it's considered very acceptable.
Some users would probably not even see the difference if a request takes 2 minutes versus 3 minutes.
The really difficult part is context caching and external tools, because now you are depending on services that might be lagging.
Executing code, browsing the web: all of that is tricky to scale because these services are very unreliable (they tend to time out, require a large cache of web pages, need captchas circumvented, etc.).
These are traditional scaling problems, but they are more difficult because all these pieces are fragile and queues can snowball easily.
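One standard defense against the snowballing is admission control plus a hard timeout on every tool call. A minimal Python sketch (the limits, names, and numbers here are arbitrary assumptions, not any provider's real setup):

```python
import threading
import concurrent.futures

# Admission control: cap in-flight tool calls so a lagging backend
# (code execution, web browsing) sheds load instead of snowballing.
MAX_IN_FLIGHT = 100  # arbitrary assumption
_slots = threading.Semaphore(MAX_IN_FLIGHT)
_executor = concurrent.futures.ThreadPoolExecutor(max_workers=16)

def call_tool(fn, *args, timeout_s: float = 10.0):
    """Run a flaky external tool with a hard timeout, failing fast so
    the model loop can retry or degrade instead of tying up a GPU."""
    if not _slots.acquire(blocking=False):
        raise RuntimeError("tool backend overloaded; shedding request")
    try:
        future = _executor.submit(fn, *args)
        return future.result(timeout=timeout_s)  # raises TimeoutError if lagging
    finally:
        _slots.release()
```

Rejecting early is the whole point: a bounded failure now is cheaper than an unbounded queue later.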
Yeah, and that totally misses the RAI part, billing, model deployment, security patches, rate limiting, caching, dead GPUs, metrics, multiple regions, gov clouds, GDPR (or data-locality issues), monitoring, alerting, and god knows what else, all while at extreme load.
GDPR doesn't affect load, dead GPUs are no different from any software freeze, a model deployment is a file update, metrics pipelines already scale very well at far larger volumes and scale very linearly, and security updates are hedged with gradual rollouts, canaries, feature flags, etc.
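The canary part in particular is cheap to get right. A toy Python sketch of deterministic percentage bucketing (the 5% figure and function name are purely illustrative):

```python
import hashlib

CANARY_PERCENT = 5  # illustrative: patch goes to 5% of traffic first

def is_canary(request_id: str) -> bool:
    """Deterministic bucketing: the same request id always lands on the
    same fleet, which keeps debugging and rollback sane."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return bucket < CANARY_PERCENT
```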
From an ops perspective, all of these are already well-solved problems, solved in a very scalable manner, because plenty of companies have had to solve them before.
It's even better here because you can throw millions in salaries to "steal" the insider info on how their production actually works.
No doubt it is fast-paced, but the complexity of going from 100k GPUs to 1M is much lower than that of going from 1k to 10k GPUs.
All 3 big AI companies had the luxury that, during the scaling phase, they could do everything directly on production servers.
This is because customers were very very tolerant, and are still quite tolerant.
You can even set request limits for large users and shape the traffic.
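Shaping large users usually comes down to a per-customer token bucket. A minimal Python sketch (the rate and burst numbers are made up):

```python
import time

class TokenBucket:
    """Classic token bucket: a customer gets a steady refill rate plus a
    burst allowance; requests beyond that are delayed or rejected."""

    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at the burst capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

And since the limits are unpublished, the bucket parameters can be tuned per customer without anyone noticing.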
Cloudflare, in comparison: high scale, low latency, end users not at all tolerant of downtime, customers even less tolerant, clearly hostile actors actively trying to take your systems down, a limited budget, lots of different workloads, etc.
So for LLM companies, where you have to scale a single workload, largely from mostly free users, where most paid customers can be throttled and nobody is going to complain because nobody knows what the limits are, and where there is a lot of tolerance for high latency and even downtime, you are very lucky.