
Comment by p1necone

3 days ago

Surely if you end up relying on a given prompt to produce the exact same code every time, you should instead just check that code into source control the first time you generate it?

A deterministic LLM isn't going to behave appreciably differently from a non-deterministic one if your input or context varies by even a tiny bit (pun intended) each time.

If nothing has changed, caching the result would certainly be cheaper. But if you're doing that as part of a test, it's not really running the test, and caching might defeat the test's purpose.
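To make the "varies by even a tiny bit" point concrete, here is a minimal sketch of such a cache, keyed by an exact hash of prompt plus context. All names here (`generate`, `cache_key`, the `llm` callable) are hypothetical illustrations, not any real library's API:

```python
import hashlib

# Cache of LLM outputs keyed by an exact hash of (prompt, context).
_cache: dict[str, str] = {}

def cache_key(prompt: str, context: str) -> str:
    # Hash both inputs; a NUL separator keeps ("ab","c") distinct from ("a","bc").
    return hashlib.sha256((prompt + "\x00" + context).encode()).hexdigest()

def generate(prompt: str, context: str, llm) -> str:
    # Only call the model on a cache miss.
    key = cache_key(prompt, context)
    if key not in _cache:
        _cache[key] = llm(prompt, context)
    return _cache[key]

calls = []
def fake_llm(prompt, context):
    calls.append(1)
    return "generated code"

generate("p", "ctx", fake_llm)   # miss: model is called
generate("p", "ctx", fake_llm)   # hit: cached result returned
generate("p", "ctx ", fake_llm)  # one trailing space → different key → miss
```

A single changed character in the context produces a new key and a fresh model call, so the cache (like a deterministic model) only helps when the inputs are byte-for-byte identical.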