Comment by ua709
1 day ago
I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tooling lets one operate at a higher level than assembly. But I always have to wade through the output from AI, so I'm not sure I've moved to the next level of abstraction. But maybe that's just me.
Are compilers deterministic?
I'm sure someone, somewhere, once wrote one that wasn't, but in general, yes, they are.
The ones I use certainly are. And with a bit of training you can reason about and predict how they will respond to a given input with a high degree of accuracy, without being familiar with how the particular compiler in question was implemented.
Not so with the AI tools. At least with the ones I use anyway.
Technically, LLMs can be run in deterministic mode as well, but I don't think that is enough. Even a deterministic LLM is too chaotic: small changes in the prompt or the surrounding context can result in vastly different outputs. This makes me think we need some other property that is stronger than (or maybe orthogonal to) determinism. A notion of being well-behaved, or some other non-chaotic term, so that slightly different inputs don't lead to vastly unexpected results.
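"Deterministic mode" here is essentially greedy (temperature-0) decoding: always pick the highest-scoring next token instead of sampling. A minimal Python sketch, using a made-up CRC32-based "model" in place of a real network (the vocabulary and scoring are pure illustration), shows how determinism and chaos can coexist:

```python
import zlib

# Toy stand-in for an LLM's next-token distribution: scores derived
# deterministically from the context via CRC32. Vocabulary and scoring
# are invented for illustration, not a real model.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def next_token_scores(context: str) -> dict:
    return {tok: zlib.crc32(f"{context}|{tok}".encode()) for tok in VOCAB}

def greedy_decode(prompt: str, steps: int = 5) -> list:
    """Temperature-0 / greedy decoding: always take the argmax token,
    so identical prompts always yield identical continuations."""
    context, out = prompt, []
    for _ in range(steps):
        scores = next_token_scores(context)
        tok = max(scores, key=scores.get)  # argmax instead of sampling
        out.append(tok)
        context += " " + tok
    return out

# Deterministic: repeated runs on the same prompt agree exactly.
assert greedy_decode("the cat") == greedy_decode("the cat")

# But chaotic: a one-character change to the prompt reshuffles every
# score, so the continuation can change completely.
```

The determinism is real (same input, same output, every run), yet nothing constrains how far the output moves when the input moves a little, which is exactly the missing property being gestured at.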
Even that doesn't feel quite correct, because a compiler does seem quite chaotic too. Forget a semicolon and an otherwise 99.99%-unchanged code base produces a vastly different output. But it is still a very understandable output. Very predictable. So if we treat both compilers and LLMs as functions that map massive input strings to massive output strings, there is some property that compilers have and LLMs still lack (and the question is whether they'll always lack it). I can't quite define what it is, but it is something more than determinism.
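That compiler kind of chaos-with-predictability can be demonstrated with Python's built-in compile() standing in for a compiler (an illustrative substitute, not any particular compiler discussed above): a one-character edit flips the output completely, yet the failure is stable and precisely located.

```python
good = "def f():\n    return 1\n"
bad  = "def f()\n    return 1\n"   # same source, one colon dropped

# Compiles to a code object, identically on every run.
ok = compile(good, "<src>", "exec")

# The tiny edit produces a "vastly different" output -- an error
# instead of bytecode -- but it is the same error, pointing at the
# same place, run after run.
try:
    compile(bad, "<src>", "exec")
except SyntaxError as err:
    print(type(err).__name__)  # prints "SyntaxError", every time
```

The mapping is wildly discontinuous, but the discontinuity itself is understandable and repeatable, which is the property LLM outputs don't seem to offer.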
Given the same compiler, I believe the outputs would be the same between runs given the same inputs. I suppose that might not be true at the margins, but I would expect correctness out of whatever path it chose.
For all intents and purposes, yeah. It's really about the variance in actual outcomes vs. the expected ones. With compilers the variance is not much, is it? With LLMs that absolutely isn't the case.