Comment by rambojohnson

2 days ago

> generative AI catch on not by just imitating other instruments,

But generative AI didn’t catch on by "imitating instruments." It caught on by imitating artists, which streaming platforms and record labels then repackage and use to outsell you. False analogy.

This argument won't get you anywhere because "imitating artists" and "outselling artists" aren't actually the same thing.

I.e., complaining about training on copyrighted material and getting it banned isn't sufficient to prevent someone from building a model that makes music that outsells you, because training isn't about copying the training material; it's just one way to find the Platonic latent space of music, and you can get there by other routes.

https://en.wikipedia.org/wiki/Law_of_large_numbers

https://phillipi.github.io/prh/
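
To make the law-of-large-numbers point concrete, here's a rough, purely illustrative sketch (not anyone's actual training pipeline; the distribution, feature count, and sample sizes are all made up): two disjoint "catalogs" drawn from the same underlying distribution converge to essentially the same statistics as they grow, which is the sense in which models trained on non-overlapping data can still end up in roughly the same place.

```python
# Illustrative only: two disjoint "training sets" drawn from the same
# underlying distribution converge to nearly identical statistics as
# they grow -- the law-of-large-numbers intuition behind "you can get
# to the same latent space from different data."
import numpy as np

rng = np.random.default_rng(0)

def sample_catalog(n):
    # Stand-in for "the distribution of music": 8 made-up features per "song".
    return rng.normal(loc=0.0, scale=1.0, size=(n, 8))

for n in (100, 10_000, 1_000_000):
    catalog_a = sample_catalog(n)  # disjoint draws: no shared "songs"
    catalog_b = sample_catalog(n)
    # The learned "representation" here is just the per-feature mean;
    # the gap between the two "models" shrinks as n grows.
    gap = np.max(np.abs(catalog_a.mean(axis=0) - catalog_b.mean(axis=0)))
    print(f"n={n:>9}: max gap between the two 'models' = {gap:.4f}")
```

Real models obviously learn far richer representations than a per-feature mean, but that convergence is the same phenomenon the PRH link above is about.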

  • You're dodging the point by retreating into silly abstractions. I'm talking about the cultural and economic displacement of artists, not a pedantic debate about latent spaces. "Training isn't copying" is a cynical AI-shill line that doesn't address the fact that systems trained on artists are then packaged and monetized to outsell them. Why is this part so complicated for you? Or are you just being obnoxious...

    Dropping wiki links and math jargon sidesteps the ethical / market reality here.

    • > "Training isn't copying" is a cynical AI-shill line that doesn't address the fact that systems trained on artists are then packaged and monetized to outsell them.

      No, that's the whole problem. The systems are capable of outselling the artist whether or not they're trained on the artist. So you can't prevent it by complaining about the training data.

> But generative AI didn’t catch on by "imitating instruments."

My bad. As the first part of my comment suggested, what I meant to say here was "imitating instruments and the performers thereof".

> which streaming platforms and record labels then repackage and use to outsell you

But that's the thing: it doesn't seem very likely that they'd ever succeed at outselling very many actual musicians, for the same reason those cheap keyboards that can play pop songs at the press of a button don't replace any working musicians. It's not just that the quality sucks compared to even amateur performers; even if the quality didn't suck, the end result is about as interesting to the audience as a karaoke backing track or Muzak playing in an elevator. If anyone can press a button to make some statistical average of popular music, then that's gonna get real boring real quick, while the actual musicians will be making actual, novel music.

It's just like what happened to the “vaporwave” and “nightcore” genres: they got flooded with “new songs” that are just slowed-down / sped-up (respectively) versions of existing songs, and nobody bothered seeking those songs out unless they were really into vaporwave/nightcore for their own sake or they were trying to put together one of the umpteen bajillion “anime girl studying while listening to lo-fi beats” playlists out there.

That is:

> false analogy.

Then here's another “false” analogy for you: just like with synthesizers, just like with vaporwave/nightcore, just like with all sorts of other musical phenomena where suddenly people with no skill could easily and cheaply make musical slop, this new AI-driven wave of slop, too, will consume itself until it's yet another layer of background noise against which the actual musicians distinguish themselves and push the boundaries of music. It's a wildfire burning away yet another underbrush of mediocrity and creative stagnation, and while it's absolutely terrifying and dangerous in the present, it paves the way for healthier regrowth in the aftermath.