Comment by lexicality
13 hours ago
The entire point of LLMs is that they produce statistically average results, so of course you're going to have problems getting them to produce non-average code.
> The entire point of LLMs is that they produce statistically average results, so of course you're going to have problems getting them to produce non-average code.
This was true circa GPT-2, less true after RLHF, and not true at all after RLVR (reinforcement learning from verifiable rewards). It's trying to model the distribution of outputs most likely to solve the problem, not the training data's average distribution.
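Sketching the standard KL-regularized objective used in post-training (details vary by method; here R is the reward, e.g. a verifier for RLVR, and pi_ref is the pretrained model):

$$\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{x}\Big[\,\mathbb{E}_{y\sim\pi(\cdot\mid x)}\big[R(x,y)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)\Big]$$

Its optimum has the known closed form $\pi^{*}(y\mid x) \propto \pi_{\mathrm{ref}}(y\mid x)\,e^{R(x,y)/\beta}$: the pretraining distribution reweighted toward high-reward outputs, not its average.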
They (are supposed to) produce the average only on average, and the output distribution is (supposed to be) conditioned on the context.
Yeah, but ultimately it's all just function approximation, which produces some kind of conditional average. There's no getting away from that, which is why it surprises me that we expect them to be good at science.
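To pin down what "conditional average" means, these are the standard optima for the two losses in play (squared error for regression, cross-entropy for language-model pretraining):

$$f^{*} \;=\; \arg\min_{f}\;\mathbb{E}\big[(Y-f(X))^{2}\big] \quad\Longrightarrow\quad f^{*}(x)=\mathbb{E}[Y\mid X=x]$$

$$\arg\min_{\theta}\; \mathbb{E}_{(x,y)\sim p_{\mathrm{data}}}\big[-\log p_{\theta}(y\mid x)\big] \quad\Longrightarrow\quad p_{\theta}(\cdot\mid x)=p_{\mathrm{data}}(\cdot\mid x)$$

Squared error literally recovers the conditional mean; cross-entropy recovers the whole conditional distribution, which is the looser sense of "average" at work here.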
They'll probably get really good at model approximation, since there's a clear reward signal, but where that feedback loop is impossible or very difficult, we shouldn't expect them to do well.