Comment by malfist

3 hours ago

"safety considerations" don't matter. The main sticking point with LLMs is that it's a blatant theft of everyone's copyright all while letting the bosses threaten your job. Blatantly stealing to wealth transfer to the ultrawealthy.

I realized that one of my bigger issues with LLMs is actually that I worry they increase "information entropy" on average. Most tools help me reduce entropy - LLMs seem to increase it, on a global scale.

This is related to my observation that for thousands of years, written text has indicated a human author - this is no longer true, and I think this is going to be very difficult for us to wrap our human brains around fully.

  • Interesting take. Hadn't thought of it in terms of entropy, but it's true — almost by definition, since the training process doesn't introduce anything novel beyond the scraped inputs and a randomly initialized network. From there, the stochastic generation only adds randomness (and the prompt, of course).

    • Generally I think this is a legitimate issue, although:

      > the training process doesn't introduce anything novel

      This is not always the case. A compiler, linter, proof checker, tests, etc. can all lower entropy in the training loop.

That might be the case from your position. But if you were a woman whose stalker was able to locate your photos with ease and generate deepfakes or emulate your voice to feed his obsession, you might think differently. If you were worrying about your kids surviving tomorrow because an AI system might target their school for the next round of bombings, then copyright infringement might not be your top concern.