Comment by nippoo
19 hours ago
Truly! As someone who's worked with HPC and GPUs in a scientific research context, trying to get a service like this to work reliably is a different ballgame to your usual webapp stack...
But… imagine that same scientific research, except with an unlimited budget. I’d imagine that helps.
Some of the comments here mention their monthly spend, and it’s eye-watering.
It would be "unlimited budget" if they were a monopoly, but they're in a bidding war with three other "unlimited" budget AI companies, over a resource no one expected to be scarce. There's simply not enough supply to meet demand, no matter how much money you have
I think you have to see this as a bunch of stateless requests, and this makes the problem way easier.
It's very doable: any serious SRE can find a way to set up "larger than one card" models like Kimi or DeepSeek (unquantized) if they have a tightly-coupled HPC cluster (or a pair of very, very beefy servers).
If you run out of servers, that's again a money problem, not an architectural one (and modern datacenters are already scalable).
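To make the "larger than one card" point concrete, here is a minimal sketch using vLLM's tensor and pipeline parallelism. The model name and parallelism degrees are illustrative assumptions, not anything from the comment, and a real multi-node run additionally needs a distributed (e.g. Ray) setup.

```python
# Minimal sketch: serving a model too big for one GPU by sharding each layer
# across the GPUs of a node (tensor parallelism) and splitting layers across
# nodes (pipeline parallelism). Model and sizes are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3",  # hypothetical choice of large model
    tensor_parallel_size=8,           # shard layers across 8 GPUs per node
    pipeline_parallel_size=2,         # split the layer stack across 2 nodes
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain tensor vs. pipeline parallelism."], params)
print(outputs[0].outputs[0].text)
```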
Take the best SRE, but no budget, and there is no solution.
So inference is the easy part.
If Codex or Claude Code takes a long time or has slow cold-start latency, that's considered very acceptable.
Some users would probably not even see the difference if a request takes 2 minutes versus 3 minutes.
The really difficult part is context caching and external tools, because now you are depending on services that might be lagging.
These are traditional scaling problems, but they are more difficult because all these pieces are fragile and queues can snowball easily.
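One way to keep those queues from snowballing is explicit backpressure in front of the lagging dependency. A minimal sketch, with all names and limits assumed purely for illustration:

```python
# Minimal sketch of backpressure: a bounded queue in front of a flaky
# downstream dependency (e.g. a context-cache or tool service). When the
# queue is full, requests are rejected immediately instead of piling up.
import asyncio

QUEUE_LIMIT = 100  # assumed capacity; tune to downstream throughput
queue: asyncio.Queue = asyncio.Queue(maxsize=QUEUE_LIMIT)

class Overloaded(Exception):
    """Raised when the queue is full and we shed load instead of waiting."""

async def enqueue(request: dict) -> None:
    try:
        queue.put_nowait(request)  # fail fast instead of blocking
    except asyncio.QueueFull:
        raise Overloaded("downstream is lagging; retry with backoff")

async def worker(call_downstream) -> None:
    while True:
        request = await queue.get()
        try:
            await call_downstream(request)  # e.g. tool call or cache lookup
        finally:
            queue.task_done()
```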
Yeah, and that totally misses the RAI part, billing, model deployment, security patches, rate limiting, caching, dead GPUs, metrics, multiple regions, gov clouds, GDPR (or data-locality issues), monitoring, alerting, and god knows what else, all while under extreme load.
GDPR doesn’t affect load, dead GPUs are no different from any software freeze, a model is a file update, metrics already scale very well (even at far larger volumes, and they grow linearly), and security updates are hedged with gradual rollouts, canaries, feature flags, etc.
From an ops perspective, all of these are already well-solved problems at scale, because plenty of companies have had to solve them before.
It’s even better here because you can throw millions in salaries to “steal” the insider info on how their production actually works.
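As an illustration of the gradual-rollout point above, a percentage-based canary gate can be as simple as the following sketch; the flag name, user ID, and rollout percentage are assumptions for the example, not anything described in the thread.

```python
# Minimal sketch of a percentage-based gradual rollout, the kind used to
# hedge security patches and model updates. Names are illustrative only.
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """Deterministically bucket a user into the canary population."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    return bucket < percent / 100.0

# Start with 1% of users on the patched build, then ramp up if metrics hold.
use_new_build = in_rollout(user_id="user-123", flag="gpu-driver-patch", percent=1.0)
```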
No doubt it is fast-paced, but the complexity of going from 100k GPUs to 1M is much lower than that of going from 1k to 10k GPUs.
All 3 big AI companies had the luxury that during the scaling phase they could do everything directly on production servers.
This is because customers were very very tolerant, and are still quite tolerant.
You can even set request limits for large users and shape the traffic.
Cloudflare, in comparison: high scale, low latency, end users not at all tolerant of downtime, customers even less tolerant, clearly hostile actors actively trying to take your systems down, a limited budget, a lot of different workloads, etc.
So, for LLM companies, where you have to scale a single workload, largely from mostly-free users, where most paid customers can be throttled without anyone complaining (because nobody knows what the limits are), and where there is a lot of tolerance for high latency and even downtime, you are very lucky.
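The throttling mentioned above is usually just per-customer traffic shaping. A minimal token-bucket sketch, with rates and customer IDs assumed purely for illustration:

```python
# Minimal sketch of per-customer traffic shaping with a token bucket.
# Rates, burst sizes, and customer IDs are illustrative assumptions.
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    rate: float       # tokens refilled per second
    capacity: float   # maximum burst size
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def __post_init__(self) -> None:
        self.tokens = self.capacity  # start with a full burst allowance

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # throttle: tell the client to back off

buckets: dict[str, TokenBucket] = {}

def admit(customer_id: str) -> bool:
    bucket = buckets.setdefault(customer_id, TokenBucket(rate=5.0, capacity=20.0))
    return bucket.allow()
```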
Can you speak a little more to this? I'm curious what kind of parameters one must consider/monitor and what kind of novel things could go wrong.
My guesses are:
Hardware capacity constraints are going to be the big one.
Effective caching is another; I bet if you start hitting cold caches the whole thing is going to degrade rapidly (a rough sketch of such a cache follows this list).
The ground is probably shifting pretty rapidly.
Power users are trying to get the most out of their subscriptions and so are hammering you as fast as they possibly can. See Ralph loops.
Harnesses are evolving pretty rapidly, and new alternative harnesses keep appearing. That makes the load patterns less predictable and harder to cache.
The demand is increasing both from more customers and from each user as they figure out more effective workflows.
Users are pretty sensitive to model quality changes. You probably want smart routing, but users want the best model all the time.
Models keep getting bigger and bigger.
On top of that, they are probably hiring and onboarding more people, so system complexity and codebase complexity are growing.
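Regarding the cold-cache guess above: in a real serving stack the cache would hold GPU KV-cache blocks keyed on the conversation prefix, but an in-process LRU sketch shows the general shape; every name and size here is an illustrative assumption.

```python
# Minimal sketch of a prefix cache keyed on a hash of the conversation
# prefix. A cache miss means the caller pays the full prefill cost, which
# is how cold caches make everything degrade. Illustrative only.
import hashlib
from collections import OrderedDict

class PrefixCache:
    def __init__(self, max_entries: int = 10_000) -> None:
        self._entries: OrderedDict[str, bytes] = OrderedDict()
        self._max = max_entries

    @staticmethod
    def _key(prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def get(self, prefix: str) -> bytes | None:
        key = self._key(prefix)
        if key not in self._entries:
            return None                        # cold: full prefill required
        self._entries.move_to_end(key)         # mark as recently used
        return self._entries[key]

    def put(self, prefix: str, state: bytes) -> None:
        key = self._key(prefix)
        self._entries[key] = state
        self._entries.move_to_end(key)
        if len(self._entries) > self._max:
            self._entries.popitem(last=False)  # evict least recently used
```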
Just ask Claude and some agents to fix it...