Comment by fooker

1 day ago

> Not a single LLM available as a SaaS is deterministic.

Lower the temperature parameter.

That's not enough. I've done this and still often gotten different results for the same question.
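
For what it's worth, even pinning both temperature and seed through the API only gets you best-effort reproducibility. A minimal sketch of that attempt, assuming the OpenAI Python SDK with a placeholder model name and prompt:

```python
# Minimal sketch: request "deterministic" output via the API.
# Model name and prompt are placeholders; the point is that even with
# temperature=0 and a fixed seed, determinism is only best-effort,
# so repeated runs can still differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": question}],
        temperature=0,        # suppress sampling randomness
        seed=42,              # best-effort reproducibility only
    )
    return response.choices[0].message.content

# The same question asked twice can still come back with different answers.
print(ask("Summarize the plot of Hamlet in one sentence."))
print(ask("Summarize the plot of Hamlet in one sentence."))
```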

And how does one do it outside the API, in the context we're discussing? In the UI, or when invoking @grok on X?

How do we also turn off all the intermediate layers in between that we don't know about, like "always rant about white genocide in South Africa" or "crash when the user mentions David Meyer"?