Comment by jedberg

11 hours ago

That's not exactly how LLM temperature works. :) Also, that's on inference, not training. Presumably these would be used for training; the latency would be too high for inference.

It doesn't work like that, but it can.

Latency would be fine for inference: this is low Earth orbit, which is about 25ms optimistically. That's well within what we expect from our current crop of non-local LLMs.
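As a sanity check on the latency claim, here's a rough sketch of the physical lower bound: round-trip light travel time to a satellite directly overhead. The 550 km altitude is an assumption (Starlink-like LEO); real latency adds slant-path geometry, ground-station routing, and processing, which is why 25ms is the optimistic end rather than the few milliseconds of pure propagation.

```python
# Speed of light in vacuum, km/s
C_KM_S = 299_792.458

def propagation_rtt_ms(altitude_km: float) -> float:
    """Round-trip light travel time to a satellite directly overhead,
    in milliseconds. Ignores slant angles, routing, and processing."""
    return 2 * altitude_km / C_KM_S * 1000

# Assumed LEO altitude of 550 km: propagation RTT is only a few ms,
# so the ~25ms figure is dominated by routing/processing overhead.
leo_rtt = propagation_rtt_ms(550)
print(f"{leo_rtt:.2f} ms")  # well under the 25ms optimistic estimate
```

Even with generous overhead on top of the ~3.7ms physical floor, LEO stays far below the hundreds of milliseconds typical of hosted LLM time-to-first-token.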