Comment by johnmaguire
2 hours ago
I realized that one of my bigger issues with LLMs is actually that I worry they increase "information entropy" on average. Most tools help me reduce entropy - LLMs seem to increase it, on a global scale.
This is related to my observation that for thousands of years, written text has indicated a human author - this is no longer true, and I think this is going to be very difficult for us to wrap our human brains around fully.
Interesting take. I hadn't thought of it in terms of entropy, but it's true, almost by definition: the training process doesn't introduce anything novel beyond the scraped inputs and a randomly initialized network. From there, stochastic generation only adds randomness (and the prompt, of course).
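To make the "sampling only adds randomness" point concrete, here's a minimal sketch (hypothetical next-token logits, standard softmax with temperature) showing that the Shannon entropy of the sampling distribution grows as temperature rises:

```python
import math

def softmax(logits, temperature):
    # Scale logits by temperature before normalizing; higher T flattens the distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def shannon_entropy(probs):
    # Entropy in bits; terms with p == 0 contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

logits = [4.0, 2.0, 1.0, 0.5]  # hypothetical logits for four candidate tokens
for t in (0.2, 1.0, 2.0):
    h = shannon_entropy(softmax(logits, t))
    print(f"T={t}: entropy = {h:.3f} bits")
```

At low temperature the distribution collapses toward the argmax token (near-zero entropy); at high temperature it flattens and entropy climbs toward log2(vocab size). The sampler can redistribute probability mass, but it can't add information that wasn't in the model.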
Generally I think this is a legitimate issue, although:
> the training process doesn't introduce anything novel
This is not always the case: external feedback folded into the loop, a compiler, linter, proof checker, test suite, etc., can all lower entropy.