Comment by mkfs
5 days ago
The diffusion-based art generators seem pretty evil. Trained (without permission) on artists' works, devalues said works (letting prompt jockeys LARP as artists), and can then be deployed to directly compete with said artists to threaten their livelihoods.
These systems (LLMs, diffusion) yield imitative results just powerful enough to eventually threaten the jobs of most non-manual laborers, while simultaneously being not powerful enough (in terms of capability to reason, to predict, to simulate) to solve the hard problems AI was promised to solve, like accelerating cancer research.
To put it another way: in their present form, even with significant improvement, how many years of life expectancy can we expect these systems to add? My guess is zero. But I can already see a huge chunk of graphic designers, artists, actors, programmers, and other office workers being made redundant.
Making specific categories of work obsolete is not evil by any existing moral code I know. On top of that, history shows that humans are no less employed over the generations as we've automated more things. Your entire comment is rooted in fear, uncertainty, and doubt.

I have the opposite mindset. I love the fact that we have trained models on large corpora of human culture. It's beautiful and amazing. Nobody has the right to dictate how the culture they generate shall be consumed — not me, not you, not Warhol, not Doctorow, not Lessig.

Human-created art always has been and will continue to be valuable. The fact that copyright is a really poor way to monetize art is not an argument that AI is evil. I support all my favorite creators on Patreon, not by buying copies of their work.