Comment by xnx

2 days ago

Yes. This is what I was trying to say. Saying "It’s worth noting that LLMs are non-deterministic" is wrong and should be changed in the blog post.

> Saying "It’s worth noting that LLMs are non-deterministic" is wrong and should be changed in the blog post.

Every person in this thread understood that Simon meant "Grok, ChatGPT, and other common LLM interfaces run with a temperature>0 by default, and thus non-deterministically produce different outputs for the same query".

Sure, he wrote a shorter version of that, and because of that y'all can split hairs on the details ("yes it's correct for how most people interact with LLMs and for grok, but _technically_ it's not correct").

The point of English blog posts is not to be a long wall of logical propositions; it's to convey ideas and information. The current wording seems fine to me.

The point of what he was saying was to caution readers "you might not get this if you try to repro it", and that is 100% correct.
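
For readers who have only ever used the chat products, here is a minimal sketch of what "temperature>0 by default" means in practice (plain NumPy with made-up logits, not a real model call): greedy decoding always picks the same token, while sampling at a positive temperature can pick different tokens on different runs.

```python
import numpy as np

# Made-up next-token logits standing in for a model's forward pass (not a real LLM call).
logits = np.array([2.0, 1.9, 0.5, -1.0])
tokens = ["cat", "dog", "fish", "rock"]

def pick(temperature, rng):
    if temperature == 0:
        return int(np.argmax(logits))                # greedy: same token every run
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))     # sampled: can vary run to run

rng = np.random.default_rng()  # unseeded, like a typical hosted API
print([tokens[pick(0.0, rng)] for _ in range(5)])    # always ['cat', 'cat', 'cat', 'cat', 'cat']
print([tokens[pick(0.8, rng)] for _ in range(5)])    # e.g. ['cat', 'dog', 'cat', 'cat', 'dog']
```

Setting the temperature to 0 (or pinning a seed) is the "deterministic" mode the rest of this thread is arguing about.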

  • Still, the statement that LLMs are non-deterministic is incorrect and could mislead some people who simply aren't familiar with how they work.

    Better phrasing would be something like "It's worth noting that LLM products are typically operated in a manner that produces non-deterministic output for the user"

    • > It's worth noting that LLM products are typically operated in a manner that produces non-deterministic output for the user

      Or you could abbreviate this by saying “LLMs are non-deterministic.” Yes, it requires some shared context with the audience to interpret correctly, but so does every text.

    • Simon would be less engaging if he caveated every generalisation in that way. It’s one of the main reasons academic writing is often tedious to read.

You’re correct for batch size 1 (which is what local inference is), but not for the production use case where multiple requests get batched together (and that’s how all the providers do this).

With batching, the matrix shapes and a request’s position within the batch aren’t deterministic, and this leads to non-deterministic results regardless of sampling temperature/seed.
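
A rough sketch of the effect (plain NumPy, made-up numbers, not any particular serving stack): the same logical dot product accumulated in different block sizes, which is the kind of thing that changes when a request lands in a differently shaped batch, gives slightly different float32 results even though nothing about sampling changed.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float32)       # one request's activations (made up)
W = rng.standard_normal((4096, 2)).astype(np.float32)  # two candidate-token weight columns (made up)

def blocked_logits(block):
    # Same math, different accumulation order -- a stand-in for the different
    # tilings/reductions a kernel may choose depending on batch shape.
    acc = np.zeros(W.shape[1], dtype=np.float32)
    for i in range(0, len(x), block):
        acc += x[i:i + block] @ W[i:i + block]
    return acc

a = blocked_logits(128)
b = blocked_logits(1024)
print(a - b)  # typically tiny but nonzero float32 differences
# If two candidate tokens' logits are closer than this gap, even greedy (temperature 0)
# decoding can pick a different token depending on how the batch was shaped.
```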

  • Isn't that true only if the batches are different? If you run exactly the same batch, you're back to a deterministic result.

    With a black-box API, just because you don't know how the result is calculated doesn't mean it's non-deterministic. It's the underlying algorithm that determines that, and an LLM is deterministic.

    • Providers never run the same batches, because they mix requests from different clients; otherwise the GPUs would be severely underutilized.

      It’s inherently non-deterministic because it reflects the reality of having different requests coming to the servers at the same time. And I don’t believe there are any realistic workarounds if you want to keep costs reasonable.

      Edit: there might be workarounds if matmul algorithms gave stronger guarantees than they do today (invariance under row/column swaps). I’m not expert enough to say how feasible that is, especially in a quantized scenario.
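
      A hypothetical way to see the missing guarantee (plain NumPy; whether it actually shows a gap depends on your BLAS build): the same request computed alone versus as the first row of a larger batch may be dispatched to different kernels and accumulation orders, and nothing promises the two results match bit-for-bit.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.standard_normal((32, 4096)).astype(np.float32)   # a batch of 32 requests (made up)
      B = rng.standard_normal((4096, 512)).astype(np.float32)  # a weight matrix (made up)

      alone    = A[:1] @ B      # request 0 processed on its own
      in_batch = (A @ B)[:1]    # the same request 0 processed inside the batch

      # Mathematically identical, but bit-for-bit equality is not guaranteed: the two
      # shapes may hit different kernels/accumulation orders. Depending on the BLAS
      # build, this can print True or False.
      print(np.array_equal(alone, in_batch), np.max(np.abs(alone - in_batch)))
      ```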

"Non-deterministic" in the sense that a dice roll is when you don't know every parameter with ultimate precision. On one hand I find insistence on the wrongness on the phrase a bit too OCD, on the other I must agree that a very simple re-phrasing like "appears {non-deterministic|random|unpredictable} to an outside observer" would've maybe even added value even for less technically-inclined folks, so yeah.