Comment by troupo

2 days ago

> Barring rare(?) GPU race conditions, LLMs produce the same output given the same inputs.

Are these LLMs in the room with us?

Not a single LLM available as a SaaS is deterministic.

As for other models: I've only run ollama locally, and it, too, provided different answers for the same question five minutes apart.

Edit/update: not a single LLM available as a SaaS produces deterministic output, especially when used from a UI. Pointing out that you could probably run a tightly controlled model in a tightly controlled environment to achieve deterministic output is entirely irrelevant when describing the output of Grok in situations where the user has no control over it.

The models themselves are mathematically deterministic. We add randomness during the sampling phase, which you can turn off when running the models locally.
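
For instance, here is a minimal sketch of what "turning off sampling" looks like locally, assuming the Hugging Face transformers library and gpt2 as a stand-in model (neither is mentioned in the thread):

    # Greedy decoding: no sampling, so repeated runs on the same hardware
    # and software stack produce the same output for the same input.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    # do_sample=False takes the argmax token at every step instead of
    # sampling from the output distribution.
    out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
    print(tokenizer.decode(out[0], skip_special_tokens=True))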

The SaaS APIs are sometimes nondeterministic due to caching strategies and load balancing between experts on MoE models. However, if you took that model and executed it in a single-user environment, it could also be run deterministically.

  • > However, if you took that model and executed it in a single-user environment,

    Again, are those environments in the room with us?

    In the context of the article, is the model executed in such an environment? Do we even know anything about the environment, randomness, sampling, and anything in between, or have any control over it (see e.g. https://news.ycombinator.com/item?id=44528930)?

> Not a single LLM available as a SaaS is deterministic.

Gemini Flash has deterministic outputs, assuming you're referring to temperature 0 (obviously). Gemini Pro seems to be deterministic within the same kernel (?) but is likely switching between a few different kernels back and forth, depending on the batch or some other internal grouping.
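
For reference, a minimal sketch of pinning temperature to 0 against the Gemini API; the google-generativeai client and the exact model name are my assumptions, not details from the thread:

    # Request Gemini Flash output with temperature 0; whether results are
    # bit-identical across calls is still up to the provider's serving stack.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(
        "Summarize the plot of Hamlet in one sentence.",
        generation_config={"temperature": 0.0},
    )
    print(response.text)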

  • And is it the author of the original article running Gemini Flash/Gemini Pro through an API where he can control the temperature? Can kernels be controlled by the user? Can any of those be controlled through the UIs/APIs where most of these LLMs are invoked from?

    > but is likely switching between a few different kernels back and forth, depending on the batch or some other internal grouping.

    So you're literally saying it's non-deterministic.

    • The only thing I'm saying is that there is a SaaS model that would give you the same output for the same input, over and over. You just seem to be arguing for the sake of arguing, especially considering that non-determinism is a red herring to begin with, and not a thing to care about for practical use (that's why providers usually don't bother with guaranteeing it). The only reason it was mentioned in the article is because the author is basically reverse engineering a particular model.


> Not a single LLM available as a SaaS is deterministic.

Lower the temperature parameter.
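
Concretely, against a local Ollama server you can pin both the temperature and a seed; a sketch, where "llama3" is just an example model name:

    # Ollama's REST API accepts per-request options, including temperature
    # and seed, which together make local generation repeatable.
    import requests

    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": "Why is the sky blue?",
            "stream": False,
            "options": {"temperature": 0, "seed": 42},
        },
    )
    print(response.json()["response"])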

  • It's not enough. I've done this and still often gotten different results for the same question.

  • So, how does one do it outside of APIs in the context we're discussing? In the UI or when invoking @grok in X?

    How do we also turn off all the intermediate layers in between that we don't know about, like "always rant about white genocide in South Africa" or "crash when user mentions David Meyer"?

Akchally... Strictly speaking, and to the best of my understanding, LLMs are deterministic in the sense that a dice roll is deterministic; the randomness comes from insufficient knowledge about the internal state. But if you use a constant seed and run the model with the same sequence of questions, you will get the same answers. It's possible that interactions with other users who use the model in parallel could influence the outcome, but given that the state-of-the-art technique for providing memory and context is to re-submit the entirety of the current chat, I doubt that. One hint that what I surmise is in fact true can be gleaned from those text-to-image generators that allow seeds to be set: you still don't get a 'linear', predictable (but hopefully somewhat sensible) relation between prompt and output, but each (seed, prompt) pair will always give the same sequence of images.
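
A minimal sketch of that last point, assuming the diffusers library and a Stable Diffusion checkpoint (the thread doesn't name a particular generator):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("cuda")

    prompt = "a watercolor painting of a lighthouse at dusk"

    # A torch.Generator with a fixed seed makes the initial latent noise
    # reproducible, so the same (seed, prompt) pair yields the same image.
    image1 = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
    image2 = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
    # image1 and image2 should be identical, barring the low-level GPU
    # non-determinism mentioned at the top of the thread.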