
Comment by fooker

13 hours ago

LLMs specifically are fine with random bits being flipped; it just makes the results more 'creative'.

That's not exactly how LLM temperature works. :) Also, that happens at inference, not training. Presumably these would be used for training; the latency would be too high for inference.
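To make the distinction concrete: temperature doesn't flip bits anywhere; it rescales the logits before the softmax, which reshapes the sampling distribution. A minimal sketch in plain Python (the logit values are made up for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Temperature rescales logits before the softmax:
    # low T sharpens the distribution toward the argmax,
    # high T flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# At a very low temperature the most likely token wins essentially always.
sample_with_temperature([5.0, 1.0, 0.0], temperature=0.01)
```

Note that the noise here is controlled and applied at the sampling step; a random bit flip in a weight or activation is an entirely different (and unbounded) kind of perturbation.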

  • It doesn't work like that by default, but it can.

    Latency would be fine for inference: this is low Earth orbit, which is about 25 ms, optimistically. That's well within what we expect from the current crop of non-local LLMs.
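A quick back-of-the-envelope check on that 25 ms figure, assuming a ~550 km orbit (the altitude is my assumption, not stated above):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(altitude_km):
    # Straight-line propagation delay to a satellite directly overhead.
    return altitude_km / C_KM_PER_S * 1000

# At ~550 km, one-way propagation is under 2 ms, round trip under 4 ms.
round_trip_ms = 2 * one_way_delay_ms(550)
```

So the raw physics is only a few milliseconds; latencies in the 25 ms range come mostly from routing, queuing, and the ground-station hop, which still leaves plenty of headroom against typical remote-LLM response times.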