Comment by marcus_holmes
3 days ago
LLMs can definitely produce original work in other domains: ask one to create a poem on an extremely specific niche subject and it will do so. You can specify the niche subject to the point where it is incredibly unlikely that any poem on that subject exists in the training data, and it will still produce an original poem on it [0]. The well-known "otter using wifi on a plane" series of images [1] is another example: that was not in the training data (well, it is now, precisely because it became well known, but you get the idea).
Is there something unique about code, as distinct from language (or images), that would make it impossible for an LLM to produce original code? I don't believe so, but I'm willing to be convinced.
I think this shifts the burden of proof: we know LLMs can produce original content in other contexts, so why would they be unable to produce original code?
[0] Ever curious, I tested this assumption. I got Claude to write an original limerick about goats oiling their beards with olive oil, the first suitably niche subject I could think of. I googled the result and could not find anything close to it. I then asked it to produce another limerick on the same subject, and it produced a different one, so it is clearly not just repeating its training data.
[1] https://www.oneusefulthing.org/p/the-recent-history-of-ai-in...
No, it transformed your prompt. Another person giving it the same prompt, starting from the same state, will get the same result: f('your prompt here') is a deterministic transformation of your prompt based on hidden state.
This is also true of humans, see every debate on free will ever.
The trick, of course, is getting to the exact same starting state.
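To make the f(prompt) framing concrete, here is a minimal sketch, assuming a locally runnable model via Hugging Face transformers ("gpt2" and the prompt are just placeholders). With greedy decoding there is no sampling randomness at all, so the same weights plus the same prompt give the same output every run:

    # Minimal sketch (assumed setup: pip install torch transformers).
    # Greedy decoding removes sampling randomness, so identical model
    # state + identical prompt -> identical output on every run
    # (on CPU; GPU kernels can add tiny floating-point nondeterminism).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any causal LM works
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def f(prompt: str) -> str:
        """Deterministic transformation of the prompt, given fixed weights."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
        return tokenizer.decode(out[0], skip_special_tokens=True)

    print(f("your prompt here") == f("your prompt here"))  # prints True

Chat products only feel nondeterministic because they sample (temperature > 0) and because you never control the full starting state, which is exactly the "trick" above.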