
Comment by zozbot234

16 hours ago

You can run multiple inferences in parallel on the same set of weights; that's what batching is. Given enough parallelization it can be almost entirely compute-limited, at least for small contexts (max ~10GB per request apparently, but that's for 1M tokens!).
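
A minimal sketch of what "multiple inferences over the same weights" looks like in practice, assuming a Hugging Face transformers setup (the model name, prompts, and generation settings here are illustrative, not from the thread):

```python
# Batched decoding sketch: several prompts share one set of weights,
# so each decode step is one forward pass over the whole batch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in checkpoint; swap for whatever you actually run
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # gpt2 has no pad token by default
tokenizer.padding_side = "left"             # left-pad for decoder-only generation
model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = [
    "Summarize the build failure:",
    "Write a regex for ISO dates:",
    "Explain what batching does:",
]

# One padded batch in, one batch of completions out.
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```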

For offline work that's fine, I guess, but batched or not, <1 tok/s is largely unusable for most use cases

  • I just think this potential workflow needs to be tested so that we know whether anything breaks or makes it infeasible. Ultimately any single agent would run slowly, but you might be working with a large number of them in parallel (see the sketch below). I view this as potentially a great way of repurposing low-RAM hardware with this specific model.
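
A rough sketch of the "many slow agents in parallel" idea, assuming a locally hosted model behind an OpenAI-compatible endpoint (the URL, model name, and tasks are assumptions for illustration, not from the thread):

```python
# Fan many independent "agents" out against one local model server; each request
# is slow on its own, but the server can batch the concurrent requests over the
# same weights, so aggregate throughput stays reasonable.
import asyncio
import aiohttp

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical local server
MODEL = "local-model"                                    # hypothetical model id

async def run_agent(session: aiohttp.ClientSession, task: str) -> str:
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": task}],
        "max_tokens": 256,
    }
    async with session.post(ENDPOINT, json=payload) as resp:
        data = await resp.json()
        return data["choices"][0]["message"]["content"]

async def main() -> None:
    tasks = [f"Agent {i}: review file chunk {i}" for i in range(32)]
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(run_agent(session, t) for t in tasks))
    for result in results:
        print(result[:80])

if __name__ == "__main__":
    asyncio.run(main())
```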

Yes, I think what this demonstrates, and what folks are missing, is that optimization for specific scenarios is now quite possible.