Comment by maxloh

5 days ago

> the LLM giving different answers depending on the input.

LLMs are actually designed to have some randomness in their responses.

To make the answer reproducible, set the temperature to 0 (eliminating randomness) and provide a static seed (ensuring consistent results) in the LLM's configuration.
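For example, a minimal sketch assuming the OpenAI Python SDK (the model name is arbitrary, other providers expose similar knobs, and even then the seed only gives best-effort determinism):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary model name, for illustration only
    messages=[{"role": "user", "content": "Name a prime number."}],
    temperature=0,  # suppress sampling randomness
    seed=42,        # static seed; determinism is best-effort
)
print(response.choices[0].message.content)
```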

The parameter that controls how much influence the (pseudo-)random number generator has is called "temperature" in most models.

Setting it to 0 in theory eliminates all randomness: instead of sampling one word from a list of predicted next words, only the MOST PROBABLE word would ever be chosen.

However, in practice, setting the temperature to 0 in most GUIs does not actually set it to 0 but to a "very small" value ("epsilon"), the reason being to avoid a division-by-zero exception/crash in a mathematical formula. So don't be surprised if you cannot get rid of random behavior entirely.
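To make the formula concrete, here is a toy sampler (a sketch, not any engine's actual code) showing both the epsilon workaround and the greedy special case:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float,
                      rng: np.random.Generator) -> int:
    """Toy next-token sampler illustrating the role of temperature."""
    if temperature == 0:
        # The special case: greedy decoding. Always pick the most
        # probable token; no division by temperature is needed.
        return int(np.argmax(logits))
    # Softmax with temperature divides the logits by T:
    #     p_i = exp(logit_i / T) / sum_j exp(logit_j / T)
    # Engines that skip the special case clamp T to a tiny epsilon
    # instead, which is why "0" in a GUI may not mean a true 0.
    t = max(temperature, 1e-6)
    scaled = logits / t
    scaled = scaled - scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])
rng = np.random.default_rng(seed=42)
print(sample_next_token(logits, 0.0, rng))  # always 0: greedy argmax
print(sample_next_token(logits, 0.8, rng))  # seeded, reproducible draw
```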

  • > the reason being to avoid a division-by-zero exception/crash in a mathematical formula

    Why don't they just special-case it?

In most inference engines I've seen, it's not necessary to set the temperature to 0: the sampling randomness is drawn from the seed, so a static seed will work at any temperature.
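A minimal sketch of that point, using NumPy's seeded generator as a stand-in for an engine's RNG:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, -1.0])
probs = np.exp(logits / 0.8)
probs /= probs.sum()  # softmax at temperature 0.8, well above zero

# Two generators with the same static seed make identical draws,
# so the output is reproducible despite the nonzero temperature.
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)
assert rng_a.choice(4, p=probs) == rng_b.choice(4, p=probs)
```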