Comment by cpard
It seems that a 24h SLA is standard for batch inference across vendors, and I wonder how useful it can be when you have no visibility on when the job will actually be delivered.
I wonder why they do that and who is actually getting value out of these batch APIs.
Thanks for sharing your experience!
It's like most batch processes: it's not useful when you're iterating interactively and need to see the response right away. But for data pipelines, analytics workloads, etc., you can handle that delay because no one is waiting on the response.
I'm a developer working on a product that lets users upload content. The upload is not time-sensitive. We pass the content through a review pipeline where we do moderation and analysis, plus some business-specific checks that the user uploaded relevant content. We're migrating some of that to an LLM-based approach because (in testing) the results are just as good, and tweaking a prompt is easier than updating code. We'll probably use a batch API for this and accept that content can take 24 hours to be audited.
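For anyone wondering what that looks like concretely, here's a rough sketch using the OpenAI-style Batch API (the prompt, model, and upload IDs are placeholders, not our actual pipeline; other vendors' batch endpoints have a similar shape):

```python
import json
from openai import OpenAI

client = OpenAI()

# Placeholder prompt and items; in a real pipeline these come from the upload queue.
moderation_prompt = "Review this content and flag anything that violates our policies."
uploads = [{"id": "upload-123", "text": "user-submitted content goes here"}]

# One JSONL line per item; custom_id lets us map results back to the upload later.
with open("review_batch.jsonl", "w") as f:
    for item in uploads:
        f.write(json.dumps({
            "custom_id": item["id"],
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",
                "messages": [
                    {"role": "system", "content": moderation_prompt},
                    {"role": "user", "content": item["text"]},
                ],
            },
        }) + "\n")

# Upload the request file and start the batch; results are promised within 24h.
batch_file = client.files.create(file=open("review_batch.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```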
Yeah, I get that part of batch, but even with batch processing you usually want to have some sense of when the data will be done, especially when downstream processes depend on it.
The other thing that I think makes batch LLM inference unique is that the results are not deterministic. That's where the parent's point comes in: at least some of the data should be available earlier, even if the rest arrives within 24h.
Think of it like you have a large queue of work to be done (e.g., summarize N decades of historical documents). There is little urgency to the outcome because the bolus is so large; you just want to maintain steady progress on the backlog, where cost optimization is more important than timing.
Yes, what you describe feels like a one-off job that you want to run: big and also not time-critical.
Here's an example:
If you are a TV broadcaster and you want to summarize and annotate the content generated in the past 12 hours, you most probably need access to the summaries of the previous 12 hours too.
Now if you submit a batch job for the first 12 hours of content, you might end up in a situation where you want to process the next batch but the previous one is not delivered yet.
And imo that's fine as long as you somehow know that it will take more than 12h to complete but it might be delivered to you in 1h or in 23h.
That's the part of the these batch APIs that I find hard to understand how you use in a production environment outside of one off jobs.
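The best I can see today is to gate each window on the previous batch actually completing instead of assuming the worst case; something like this (a rough sketch assuming an OpenAI-style batches endpoint, with a made-up poll interval):

```python
import time
from openai import OpenAI

client = OpenAI()

def wait_for_batch(batch_id: str, poll_seconds: int = 300) -> str:
    """Block until a submitted batch finishes and return its output file id.

    Placeholder helper for the broadcaster example: don't submit the next
    12-hour window until the previous window's summaries actually exist.
    """
    while True:
        batch = client.batches.retrieve(batch_id)
        if batch.status == "completed":
            return batch.output_file_id
        if batch.status in ("failed", "expired", "cancelled"):
            raise RuntimeError(f"batch {batch_id} ended with status {batch.status}")
        time.sleep(poll_seconds)
```

But that just moves the problem: the pipeline for the next window can stall for anything up to the full 24h.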
> who is actually getting value out of these batch APIs
I used the batch API extensively for my side project, where I wanted to ingest a large number of images, extract descriptions, and create tags for searching. Once you've got the prompt right and the output is good, you can just use the Batch API for your pipeline. For any non-time-sensitive operation, it is excellent.
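For what it's worth, pulling the results back out is just parsing the output JSONL once the batch reports completed. Roughly (a sketch against the OpenAI-style Batch API; the batch id here is a placeholder):

```python
import json
from openai import OpenAI

client = OpenAI()

# Placeholder id from an earlier submission; output_file_id is set once the status is "completed".
batch = client.batches.retrieve("batch_abc123")
output = client.files.content(batch.output_file_id)

descriptions = {}
for line in output.text.splitlines():
    result = json.loads(line)
    # custom_id maps each result back to the original image.
    descriptions[result["custom_id"]] = (
        result["response"]["body"]["choices"][0]["message"]["content"]
    )
```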
What you describe makes total sense. I think the tricky part is "non-time-sensitive": even in an environment where you don't need results in minutes, you have pipelines that run regularly and there are dependencies on them.
Maybe I'm just thinking too much in data engineering terms here.
> you have no visibility on when the job will be delivered
You do have visibility: within 24 hours. So don't submit requests you need in 10 hours.
Contrary to other comments, it's likely not because of queueing or general batch-processing reasons. I think it's because LLMs are unique in that serving them requires a lot of fixed nodes due to VRAM requirements, which makes autoscaling harder. So the batch jobs are likely executed whenever there are free resources left over from the interactive servers.
Yes, almost certainly: in this case Google sees traffic die off when a data center is in the dark. Specifically, there is a diurnal cycle of traffic, and Google usually routes users to nearby resources. So, late at night, all those backends that were running hot serving low-latency, near-real-time replies to users can instead switch over to processing batches. When I built an idle-cycle harvester at Google, I thought most of the free cycles would come from low-usage periods, but it turned out that some clusters were just massively underutilized and had free resources all 24 hours.
That makes total sense, and what it implies is that demand for interactive inference >>> batch inference in the market today.