Comment by mschild

3 days ago

> One of the things we learned very quickly was that having generated source code in the same repository as actual source code was not sustainable.

Keeping the prompts, or other commands, in a separate repository is fine, but not committing the generated code at all strikes me as questionable at best.

If you can 100% reproduce the same generated code from the same prompts, even 5 years later, given the same versions and everything, then I'd say "Sure, go ahead and don't save the generated code; we can always regenerate it." As someone who spent some time in frontend development: we've been doing it like that for a long time with (MB+) generated code, and keeping it in SCM just isn't feasible long-term.

But given this is about LLMs, which people tend to run with temperature > 0, this is unlikely to hold, so I'd really urge anyone to actually store the results (somewhere, maybe not in SCM specifically), as otherwise you won't have any idea what the code was in the future.
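One way to make "store the results somewhere" concrete: a minimal sketch (all names here are hypothetical, not from any particular tool) that logs the prompt, model version, sampling parameters, and a SHA-256 of the generated code, so a future regeneration can at least be checked against a known hash:

```python
import hashlib
import json
import time

def record_generation(prompt, model, params, generated_code,
                      log_path="generation_log.jsonl"):
    """Append one audit record per generation run.

    Even if the generated code itself lives outside SCM, the hash
    lets you tell later whether a regeneration drifted from the
    original output.
    """
    entry = {
        "timestamp": time.time(),
        "prompt": prompt,
        "model": model,
        "params": params,  # e.g. {"temperature": 0.7, "seed": 42}
        "sha256": hashlib.sha256(generated_code.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["sha256"]
```

Comparing the stored hash against a fresh run's hash then tells you whether the regeneration actually reproduced the original code.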

  • > If you can 100% reproduce the same generated code from the same prompts, even 5 years later

    Reproducible builds are far from solved even with deterministic stacks and local compilers. Throwing LLM randomness on top just makes not committing the generated code even riskier.

  • Temperature > 0 isn’t a problem as long as you can specify/save the random seed and everything else is deterministic. Of course, “as long as” is still a tall order here.

    • My understanding is that the implementation of modern hosted LLMs is nondeterministic even with known seed because the generated results are sensitive to a number of other factors including, but not limited to, other prompts running in the same batch.

    • Have any of the major hosted LLMs ever shared the temperature parameters that prompts were generated with?
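The seed point above can be illustrated with a toy sampler (fixed logits, not a real LLM): with temperature > 0, a saved seed still makes local sampling fully reproducible; it's the hosted serving stack (batching, hardware, and so on) that breaks this, not temperature itself.

```python
import math
import random

def sample_sequence(logits, temperature, seed, length=10):
    """Toy next-token sampler: softmax with temperature over fixed logits.

    With a saved seed the whole sequence is reproducible, even though
    temperature > 0 makes each individual draw random.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    top = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return [rng.choices(range(len(probs)), weights=probs)[0]
            for _ in range(length)]

logits = [2.0, 1.0, 0.5, -1.0]
# Same seed and temperature: identical sequence every run.
assert sample_sequence(logits, 0.8, seed=42) == sample_sequence(logits, 0.8, seed=42)
```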

I didn't read it that way. If I understood correctly, generated code must be quarantined very tightly, and inevitably you need to edit/override it; the manner in which you alter it must go through some kind of process, so the alteration is auditable and can again be clearly distinguished from the generated code.

Tbh this all sounds very familiar, like classic data management/admin systems for regular businesses. The only difference is that the data is code and the admins are the engineers themselves, so the temptation to "just" change things in place is too great. But I suspect it doesn't scale and is hard to manage.

I feel like using a compiler is in a sense a code generator where you don't commit the actual output

  • > I feel like using a compiler is in a sense a code generator where you don't commit the actual output

    Compilers are deterministic: given the same input you always get the same output, so there's no reason to store the output. If you don't get the same output, we call it a compiler bug!

    LLMs do not work this way.

    (Aside: am I the only one who feels the entire AI industry is predicated on replacing only development positions? We're looking at, what, 100bn invested, with almost no reduction in customers' operating costs unless the customer has developers.)

  • Sure, but compilers are (arguably) deterministic: same code in, same output out. LLMs certainly are not.

    • Yeah, I fully agree (in the other comments here, no less). I just think "I don't commit my code" reflects a specific mindset about what code actually is.
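The compiler-determinism claim in this subthread is easy to check for at least one compiler: Python's own bytecode compiler, with the code object serialized via `marshal` and hashed. This is a toy check, not a statement about C toolchains, where reproducible builds need considerably more care.

```python
import hashlib
import marshal

SOURCE = "def add(a, b):\n    return a + b\n"

def compiled_digest(source):
    """Compile source and hash the serialized code object."""
    code = compile(source, "<generated>", "exec")
    return hashlib.sha256(marshal.dumps(code)).hexdigest()

# Same input, same output: no need to store the compiled artifact.
assert compiled_digest(SOURCE) == compiled_digest(SOURCE)
```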