Comment by Supermancho
6 hours ago
> you give the same prompt to the same LLM, you will get back the exact same response.
Demonstrably incorrect. The model selection, among other configuration, is not fixed for (I would say most) hosted LLMs; they are constantly changing. I think you meant something more like an LLM with a fixed configuration, possibly with additional constraints depending on the specific implementation.
Yes, by 'same LLM', I mean literally the same model with the same random seeds. You are correct: the big hosted LLMs from providers like Anthropic and OpenAI do not meet this definition.
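The seed point can be shown with a toy sampler: if the weights and the RNG seed are both fixed, the sampled "response" is bit-for-bit reproducible. This is only a sketch; the vocabulary and probabilities below are made up, not from any real model.

```python
import random

def sample_tokens(seed, n=5):
    """Sample n tokens from a fixed, made-up distribution using a seeded RNG.

    Stands in for an LLM whose weights (the probabilities) and sampling
    seed are both pinned down.
    """
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]
    weights = [5, 3, 2, 2, 1]  # fixed "model weights" for illustration
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

# Same seed, same "model" -> the exact same response every time.
assert sample_tokens(42) == sample_tokens(42)
```

Hosted APIs break this in practice: the model build, the seed, and even serving details can change between calls, so identical prompts need not give identical outputs.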