Comment by BrokrnAlgorithm
2 days ago
I'm a musician, but I'm also pretty amused by this anti-AI wave.
There was recently a post referencing Aphex Twin and old-school IDM and electronic music stuff, and I can't help being reminded how every new tech kit was always demonized until some group of artists came along and made it their own. Even if it's just creative prompting, or perhaps custom-trained models, someday someone will come along and make a genuinely artistically viable piece of work using AI.
I'd pay for some app that lets me dump all my Ableton files into it and train some transformer on them, just to synthesize new stuff out of my unfinished body of work. It will happen, and all the lines will get blurred again, as usual.
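Roughly, all I'm imagining is a tiny next-event model trained on my own stems (a minimal sketch, assuming the projects are already bounced to MIDI and tokenized into integer event IDs; the vocab size, names, and random stand-in batch here are purely illustrative, not any real product):

    import torch
    import torch.nn as nn

    VOCAB = 512   # assumed size of the event vocabulary (pitch/step/duration bins)
    CTX = 256     # context length in tokens

    class TinyMusicLM(nn.Module):
        def __init__(self, d_model=256, n_heads=4, n_layers=4):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, d_model)
            self.pos = nn.Embedding(CTX, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, VOCAB)

        def forward(self, tokens):  # tokens: (batch, time) integer event IDs
            T = tokens.size(1)
            x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
            # Causal mask so each position only attends to earlier events
            mask = torch.triu(torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
            return self.head(self.encoder(x, mask=mask))  # (batch, time, VOCAB) logits

    model = TinyMusicLM()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    batch = torch.randint(0, VOCAB, (8, CTX))  # stand-in for real tokenized clips
    opt.zero_grad()
    logits = model(batch[:, :-1])               # predict the next event at each step
    loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    loss.backward()
    opt.step()

Sample from something like that clip by clip and you'd get endless mutated sketches of your own unfinished ideas; whether any of them are worth finishing is still on you.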
Also a musician and I don't think it's that amusing. IMO this isn't an "AI can't be art" discussion. It's about the fact that AI can be used to extract value from other artists' work without consent, and then out-compete them on volume by flooding the marketplace.
And you create music without ever having heard music before? Or are you also extracting other artists’ work and using it as inspiration for what you do?
AI music is the same as AI code. It’s derived from real code, but it’s not just regurgitated wholesale. You still as a person with taste have to guide it and provide inputs.
Electronic music made it so you didn’t have to learn to play an instrument. Auto tune made it so you didn’t have to learn how to sing on key. There are many innovations in music over time that make it easier and less gatekeepy to make music.
We are just moving from making music as a rote activity similar to code, to making music like a composer, in much the same way that you can now create software without writing code. It’s moving things up a level. It’s how the steady march of innovation happens.
It won’t work to put the genie back in the bottle; now it’s about finding what you love about it and what makes it worth it for you, and focusing on that part. Banning the new types of art is only going to last as long as it takes for people to get over their initial shock and for good products to start being produced with it.
>And you create music without ever having heard music before? Or are you also extracting other artists’ work and using it as inspiration for what you do?
Personally, I don't buy this "AI models are learning just like we do." It's an appeal to ignorance. Just because we don't fully understand how a human brain learns doesn't mean one can claim it's the same as a statistical model of ordered tokens.
But even if it were true, I'm alright with drawing a line between AI learning and human learning. The law and social conventions are for humans. I want the ability to learn from others and produce original works that show influences. If this right is granted to all humans, there is a chance someone will learn from me and outperform me. That would suck for me, but I can accept it because it came from a universal human right I also enjoy. But an AI model doesn't have human rights. For models, the law and social conventions should still favor humans. The impact on the creative community and future creative endeavors should be balanced against the people who create and use the models.
I don't know how to do that with LLMs in a way that doesn't prevent the development of these amazing models. Maybe the government should distribute a portion of the revenue generated by the models amongst all citizens, to reflect how each model's value came from the written works of those citizens.
4 replies →
> Electronic music made it so you didn’t have to learn to play an instrument.
This is a cliché. Most celebrated artists in the electronic music world can play several instruments, if not expertly, then at least with enough familiarity to understand the nuances of musical performance.
Electronic musicians are more akin to composers and probably have more in common with mathematicians and programmers in the way that they practice their craft, whereas musical performers probably have more in common with athletes in the way that they practice their craft.
6 replies →
> Electronic music made it so you didn’t have to learn to play an instrument. Auto tune made it so you didn’t have to learn how to sing on key.
Neither of those things is really true, though. They made it possible to make poor music without learning those things, I suppose, but not to make good music.
> Banning the new types of art
Nobody is seriously talking about banning AI generated music. What you're seeing is a platform deciding that AI generated music isn't something that platform is into. There are a lot of different platforms out there.
4 replies →
Humans are humans, computer programs aren't. A computer program learning doesn't matter, and it's not comparable to human learning. I have no empathy, sympathy or any sort of allegiance to computer programs.
I would imagine the vast majority of other humans agree with me. I'm not just gonna betray humankind because some 1s and 0s "learned" how to write music. Who cares, it's silicon.
> And you create music without ever having heard music before? Or are you also extracting other artists’ work and using it as inspiration for what you do?
The volume of production differs by orders of magnitude between a human producing music and a computer.
With a script and a generator, one individual could oversaturate the whole marketplace overnight, making it impossible for other individuals to be found, let alone extract any value.
Also, I don't know if you've ever done music production for fun, but you don't really just set up a prompt. It takes a significant amount of time to actually produce something: time just to set up a DAW, export an empty track, and submit it. An empty track.
Let alone actually doing all the micro-optimizations by ear and trial to produce any catchy tune. Meanwhile, a statistical approach doesn't even have to understand what it's doing; it could as well be white noise for all it matters.
> AI music is the same as AI code. It’s derived from real code, but it’s not just regurgitated wholesale. You still as a person with taste have to guide it and provide inputs.
I guess the difference is that proprietary code is mostly not used for training; models are trained on publicly available code. It's the inverse for music, where they're being trained on commercial work, not work that has been licensed freely.
1 reply →
> Or are you also extracting other artists’ work and using it as inspiration for what you do?
Yes, when I make music, I am taking inspiration from all of the other artists I've listened to and using that in my music. If someone listens to my music, they are getting some value from my contribution, but also indirectly from the musicians that inspired me.
The difference between that and AI is that I am a human being who deserves to live a life of dignity and artistic expression in a world that supports that while AI-generated music is the product of a mindless automaton that enriches billionaires who are actively building a world that makes it harder to live a life of stability, comfort, and dignity.
These are not the same thing any more than fucking a fleshlight is the same as being in a romantic relationship. The physical act may appear roughly the same, but the human experience, meaning behind it, and societal externalities are certainly not.
11 replies →
> AI music is the same as AI code. It’s derived from real code, but it’s not just regurgitated wholesale. You still as a person with taste have to guide it and provide inputs.
Not necessarily apples-to-apples here. Full songs generated from AI prompts don't crash like a computer program would. You could simply upload the garbage to Spotify and reap the rewards until it gets removed (if it ever is).
8 replies →
I think the analogy here is with Grok generating images of (real) people wearing bikinis. It could always be done in Photoshop before (and with hand-made photo montages before that), but it's now accessible at scale to people with zero skill. That's when a quantitative change becomes qualitative.
Actually, to me this is the perfect argument to make AI music not have copyright.
Normally the copyright is owned by the creator. Algorithms can't own copyrights, so there is no copyright. There is already legal history on this.
> And you create music without ever having heard music before? Or are you also extracting other artists’ work and using it as inspiration for what you do?
For me, one key difference is that I can cite my stylistic influences and the things I tried, while (to my knowledge) commercial music generation models specifically avoid doing that, and most don't provide chord/lead sheets either -- I would find it genuinely sad to talk to a musician about their arrangement/composition choices, only to find they couldn't.
4 replies →
> It won’t work to put the genie back in the bottle
It's not about putting the genie back in the bottle, it's about helping folks realize that the vague smell of farts in the air IS the genie--and this particular genie only grants costly monkey paw wishes that ultimately do more harm to the world than good.
> And you create music without ever having heard music before? Or are you also extracting other artists’ work and using it as inspiration for what you do?
This is an argument that the AI should be allowed to benefit, not the person prompting it.
> less gatekeepy to make music
Is "gatekeepy" how we're referring to skill now? "Man I'd like to make a top-quality cabinet for my kitchen, lame how those skilled carpenters are gatekeeping that shit smh"
1 reply →
> And you create music without ever having heard music before? Or are you also extracting other artists’ work and using it as inspiration for what you do?
But the parent poster is, presumably, human! Humans have the right to take inspiration like that from other humans (or machines)! Why do we seem so keen on granting machines the right to take from us? Are we not supposed to be their masters?
3 replies →
>We are just moving from making music as a rote activity similar to code
From this statement, I doubt you've written any music worth listening to, or any code that's not trivial.
Don't confuse music with muzak. What you get from an "AI" is muzak. It will never, ever have the same depth, warmth, or meaning as a human translating human emotions and experience into music and lyrics.
3 replies →
> Electronic music...
Your instrument is the computer and sound design. You still have to have talent and a musical ear to make this music.
[dead]
It's really only about the flooding-the-marketplace part, not about the extracting-value-without-consent part. The current set of GenAI music models may involve training a black-box model on a huge data set of scraped music, but would the net effect on artists' economic situations be any different if an alternate method led to the same result? Suppose some huge AI corporation hired a bunch of musicians, music theory Ph.D.s, Grammy-winning engineers, signal processing gurus, whatever, and hand-built a totally explainable model, from first principles, that required no external training data. So now they can crowd artists out of the marketplace that way instead. I don't think it would be much better.
but if no one is making Linkin Funk, can't I enjoy it just because it's made with AI?
https://www.youtube.com/watch?v=fH-BNwBV4EI
IMHO, it would be solved by just making AI "art" un-copyrightable. Fine, make "AI art" as much as you wish. Sell and buy it as much as you please if you find it to your taste. BUT, you can NOT participate in organizations that take royalties from radio stations, TVs, movies, records, etc. for publishing, performance, etc.
Wasn't it Picasso who said "good artists borrow, great artists steal"?
I've never heard an artist confident in their own ability complain about this, because they're not threatened by other competent human artists knocking them off, never mind an AI that's even worse at it.
AI is not going to out-compete anyone on volume by flooding the marketplace, because switching costs are effectively zero. Clever artists can probably find a way to squeeze controversy and marketing out of cases where they are knocked off, taking it as a compliment and juicing it for publicity.
But I liked the Picasso quote when I was younger and earlier on in my journey as a musician because it reminded me to be humble and resist the desire to get possessive -- if what I was onto was really my own, people would like it and others could try to knock it off and fail. That is a lesson that has always served me very well.
I'm starting to think more and more in my older age that being 'great' isn't a good thing. I might actually prefer being good. We'll see how that thought plays out though; give me a couple more years
> then out-compete them on volume by flooding the marketplace
1 reply →
Wait until you hear about sampling...
“great artists steal”?
Trickle-down economics with the "trickle" reduced to zero.
Why are people mad? Don't they understand that you can't stop progress? Fssss... /s
[flagged]
Spotify has a history of intentionally boosting internally produced, royalty-free and/or AI music over actual artists.
https://harpers.org/archive/2025/01/the-ghosts-in-the-machin...
2 replies →
> You're just mad that people actually like AI music.
Yes, I am! I'm also mad that people like shitty over-produced pop, though (including me sometimes), so what can you do. Life is shit.
8 replies →
Curation is a real concern. 'Flooding the market' is bad for everyone; being seen is difficult as it is. It's even harder in a slopstorm.
1 reply →
This is actually the definition of competition. You are just being drowned out by AI music so no one can discover your music. Steam had the same issue years ago, with asset flips drowning out the discoverability of actual titles, and they implemented many curation tools to help resolve the issue. Acting like AI music isn't having a similar effect on genuine musicians is just playing dumb.
26 replies →
In order to find the stuff to listen to you have to... find it. If you had to wade through, say, 1 million AI generated books to find one that isn't, then ALL of your reading would be AI generated.
A sufficient proportion of junk can cause a market to fail, taking down "legitimate" or "quality" purveyors.
Yet your argument is deeply flawed too. Flooding the market with slop makes it much more difficult to discover genuine, quality art from smaller creators.
ad hominem has no place on HN.
1 reply →
> It's about the fact that AI can be used to extract value from other artists' work without consent, and then out-compete them on volume by flooding the marketplace.
What do you think about The Prodigy?
I didn't even think about the analogy to sampling (and the prior controversy), but that is an even better analogy. Ultimately, the difference between what's creative re-use and what's a ripoff is a matter of how skillfully it's done, and there's a lot of controversy in the middle!
2 replies →
AI takes all of that old school idm and electronic music and repackages it without a human story to tell, ripping off actual musicians in the process. AI didn’t magically ‘make old IDM its own.’ It scraped decades of artists’ work, stripped out the context and intent, and reassembled the surface features. There’s no human arc, no lived constraint, no risk and no culture.
What’s being repackaged isn’t a new instrument, it’s other people’s careers. I’m not sure what part of that is supposed to be amusing.
I'm honestly not getting the human story thing when it comes to music and maybe art in general. I mean I get what it means, but I don't think it describes why people enjoy art.
To me, it seems more like people place their own meaning in art. A particular song might remind one individual of the good times they had in their teens, while the actual meaning of the song is completely different.
Bach's 5th symphony (or whatever) might be extremely annoying to someone because they had to listen to it every day at work.
And what exactly is the meaning of jazz fusion? I really like a good solo, but a lot of people hate it, they need to hear a voice. (though I don't particularly like the signature Suno or Udio solo..)
I found this AI track on Spotify that I unironically enjoyed. I listened to it every day while working on reviving an old passion project, which became its meaning to me. The tune, along with its album of random, disparate Suno generations, was taken down.
I'm not sure if I have a point here, but something is off with the story thing in art to me, from a consumer's point of view. Maybe it matters more from other artists' point of view as consumers?
Your point echoes the "death of the author" concept in literature, where the work is independent of the creator, full stop. It's a useful concept up to a point, but if you really have no idea what it means to have a deep connection to music that is wrapped up in some idea of the creator as a human being, you should trust others when they say they do and it's important to them. For those of us with that value, AI slop is offensive, and to be clear, it has precedents in history with Muzak, early schlager music etc -- what they all share is a desire to use the power of music for non-artistic ends, which sucks from any number of viewpoints. If music has non-artistic utility, that doesn't justify a concerted effort to take away artist-made music from those who may not be paying attention.
1 reply →
except that what you’re describing is the CONSUMER SIDE of meaning, not the SOURCE of it.
yes, listeners project their own memories onto music, no one’s disputing that. but that doesn’t make the creator, context, intent, or labor irrelevant. treating music as interchangeable stimulus is how you end up defending systems that strip human work of attribution, risk, and livelihood while still feeding on the cultural residue artists created in the first place.
4 replies →
> Bach's 5th symphony (or whatever) might be extremely annoying to someone because they had to listen to it every day at work.
Or Beethoven's 9th. For different reasons...
1 reply →
I know a few EDM producers and the culture seems to consist of doing the most drugs of anyone you've ever met. Which is quite risky, true.
[flagged]
The issue is not so much an artist who will use it as a tool, even though there is much to say about that; it's the hundreds of thousands of people with no interest in music whatsoever who will flood the platforms in order to make a quick buck.
> it's the hundreds of thousands of people with no interest in music whatsoever who will flood the platforms in order to make a quick buck.
Whenever I look at popular artists on streaming platforms, I see 'remixes' where people just slowed down the particular original song and added reverb or some other silly effect to it. I don't think AI existing or not will change the behaviour of people trying to make a quick buck. If they aren't using AI, they'll use a different tool as they did before.
People who won’t invest anything and just want to make a quick buck won’t be successful with AI generated content/music.
You still need to invest significant time and effort to make it work.
Musicians who are being threatened by AI impersonating them, flooding the market with music like theirs, and otherwise actually harmed by this would disagree with you. Benn Jordan speaks at length about it in this video: https://www.youtube.com/watch?v=QVXfcIb3OKo
They will be successful in drowning out the artists. Not individually so, but collectively.
They are, though, by sheer volume. Finding anything half good will be like finding a needle in a planet-sized haystack of slop.
5 replies →
LOL, you clearly haven’t checked the flood of million-plus-view ambient music bullshit videos on YouTube
2 replies →
How many engineers are using AI-generated software libraries at this point? This could be all over GitHub, but the software mostly sucks (because the AI doesn’t do architecture and real engineering; that has to be input into it right now). Increasing the volume of production doesn’t necessarily lead to the abandonment of the “good stuff”. You still have to compose the music and write the lyrics; the AI is not sophisticated enough to do that competently right now.
I don’t think this is an AI issue, but a matter of the amount of effort, the thought process, and the storytelling behind the track they made.
Before generative AI, there was already a swarm of people who aimed at maximising the number of tracks they made within a short time, with abusive marketing. It's true that they can pump out 100 tracks in a year with a template, a specialised workflow, and the right marketing techniques, but… what is the story behind this music? For many tracks, I only hear the story of:
> I am the most productive person and I can make most of the money because of that.
Quantity-wise, for sure, they win, but quality-wise I fail to imagine a more complex story than the one above, although such tracks are good for hyping a dance floor or a concert. These days, I mostly listen to music I have bought, or music made by specific communities, because of the story behind their tracks, even if it's not as polished.
It's for the same reason I haven't watched many movies since Iron Man 3: most blockbusters follow the same winning formula rather than trying something new, in-depth, or unexpected, with CGI and product placement all over the place instead of a good story.
AI just emphasises this problem even more, since commercial “art” has long been testing the majority’s newest lows.
There is a difference between using a tool to create art and using it to spam.
> Even if it's just creative prompting, or perhaps custom-trained models, someday someone will come along and make a genuinely artistically viable piece of work using AI.
We've now had this technology for 2 years. Show me one, just ONE track that is purely(!) made by AI you find honestly exciting. Not "commercially successful", mind you, something you, a musician, personally think is actually great. You are referencing Aphex Twin there, and I'm old enough to remember when I first heard "Digeridoo", so, you know, something where you just go "Wow, that's a banger". If you're DJing: something you would actually put on in a club and the crowd would go wild.
Let's cut the crap: there is none. All GenAI is good for is generating stupid memes, shitposting, ads, and generic background music. There is ZERO creative value in purely generative AI. Yes, there are tools leveraging AI models which can help musicians create tracks - entirely different thing. This is also not what Bandcamp is banning here. Most people will freely admit that AI tooling can be used creatively, like what De Staat did with the "Running backwards into the future" music video - that's all fine, really nobody is disputing that fact, although that "look" is now well established and people are mostly bored and annoyed by it, but that's just how it goes.
AI as a tool is included in almost every DAW (or can be added through VSTs), and there is no way Bandcamp could enforce a strict "no AI has been used in the process" policy. I think it is sane to separate cases where a record is entirely generated from a single prompt vs. AI used as an instrument/tool.
> there is no way Bandcamp could enforce a strict "no AI has been used in the process" policy
Good thing that's not what Bandcamp is doing, then. To spare you a click, here's the exact wording:
"Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp."
Precisely - every reverb is an impulse response, and lots of other effects are effectively the same sort of convolution we otherwise call AI when it shows up in neural networks. Arpeggiators are AI, and the random jumps between patterns in Ableton are a Markov chain.
What does Bandcamp really mean? Perhaps sampling others' voices and music is barred, but not these mini-AIs that are everywhere?
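For what it's worth, a clip follow-action table really is just a small Markov chain; a minimal sketch (clip names and probabilities made up, not pulled from any actual Ableton set):

    import random

    # Toy "follow actions": after each clip, pick the next clip from a probability table.
    FOLLOW = {
        "intro":  {"verse": 0.8, "intro": 0.2},
        "verse":  {"chorus": 0.6, "verse": 0.3, "break": 0.1},
        "chorus": {"verse": 0.5, "break": 0.5},
        "break":  {"chorus": 0.7, "intro": 0.3},
    }

    def jam(start="intro", steps=16):
        clip, sequence = start, [start]
        for _ in range(steps):
            choices, weights = zip(*FOLLOW[clip].items())
            clip = random.choices(choices, weights=weights)[0]  # one Markov step
            sequence.append(clip)
        return sequence

    print(" -> ".join(jam()))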
Please stop being intentionally obtuse. Convolution, arpeggiators, and impulse responses are not at all comparable to the output of generative AI / LLMs.
Well that's the issue. We're not seeing "artists" coming along and applying it to their years/decades of creative knowledge. We're seeing the equivalent of some cushy heir to a fortune come in with a drill and say "I can outdo these teams of diggers! We don't need diggers anymore!"
And on the surface the drill is better. But this heir is assuming that all diggers do is displace dirt. Not thinking about where to dig, how to dig safely, what to dig for, and where brute force is needed vs. a subtle touch (because even in 2025, miners keep shovels with them). That's all going out the window for "hey, I made a hole, mission accomplished!".
Instead of working with diggers to enhance their mining, they want to pretend they can dig themselves. That's why no one in the creative space is confident in this.
There are plenty of places to publish AI-generated music. Why should a platform allow music its users clearly don't want?
> demonized until some group of artists came along and made it their own
I'm pretty sure the people at Bandcamp agree with you and that's why they mention future "updates to the policy as the rapidly changing generative AI space develops".
I find it interesting that there's so much pushback against ai generated art and music while there seems to be very little for ai generated code.
Perhaps that's because there's an enormous difference between fine art and computer programs.
Also, there's quite a lot of pushback against AI-generated code, but also because unlike music, normal people have no interest in and aren't aware of the code.
They are obviously different things, but aren't the people who spent thousands of hours honing their coding and releasing their code putting in just as much time and effort, if not more, as the people who made non-AI images and music?
1 reply →
I won't merge anything AI generated in any of my FOSS projects, unless I'm successfully deceived.
In the first place, I do not regard a copyright notice and license on AI-generated code as valid, so on those grounds alone, I cannot use it any more than I could merge a piece of proprietary, leaked source code.
The copyright office agreed with you about the non-copyrightability of AI generated media so in that sense you can safely ignore copyright claims on anything AI-generated.
1 reply →
Musicians and artists are under pressure to make money, but they can't rush it,
while programmers have to rush it these days or they lose their jobs. Programmers don't have much of a say in their companies.
Devs are quite used to using other people's work for free via packages, frameworks, and entire operating systems and IDEs. It’s just part of the culture.
Music has its history in IP and royalties, and most things involved in the creation of music or art itself need to be paid for.
It’s going to be much easier for devs to accept AI when remixing code is such a huge part of the culture already. The expectation in the arts is entirely different.
This doesn't make sense to me. I mean, the term "remix" literally comes from the music scene.
Artists are constantly getting inspiration from one another, referencing one another, performing together or having their works exhibited together...
While there are some big name artists who are famously protective of the concept of IP, those artists have made headlines exactly because when they litigate they seem so unreasonable compared to the bedroom musicians and pub bands and church choirs and school teachers and wedding DJs and millions of other artists and performers whose way of participating in "the culture" is much less tied to ownership.
Most code people interact with is a creation shat out by soulless corporations, so why would they care? Being honest here, the vast majority of people have their code experience dictated by less than a handful of companies; at their jobs they are told to use these tools or go file for welfare. The animosity has been baked into the industry for quite a while; it's only very, very recently that the masses have been able to interact with open source code, and even that is getting torn down by big tech.
Compare this to music where you are free to choose and listen to whatever you want, or stare at art that moves you. IF you don
At work, most people are forced to deal with code like Salesforce or MSFT garbage; it's not the same experience at all.
Why would people care about code coming from an industry that has been bleeding them dry and making their society worse for nearly 20+ years?
What???
Every thread on HN that touches on the topic has countless people talking about how LLM-generated code is always bad and buggy, and how the people who utilize it are inexperienced juniors who don't understand anything.
And they're not completely wrong. If you don't know what you're doing, you'll absolutely create dumpster fires instead of software.
Sure, I am one of the people who will say that. But where are the people calling for it to be banned? Where are the stores and websites that are banning AI generated software?
6 replies →
Positing that AI generated code is always bad and buggy is delusional.
I have dozens of little programs and websites that are AI generated and do their job perfectly.
I think a key factor there is that programmers (in the actual sense, rather than so-called “vibe coders”) are more likely on average than (current) artists and musicians to have intimate knowledge of how AI works and what AI can and can't do well — and consequently, the quality of their output is high enough that it's harder to notice the use of AI.
Eventually that'll change, as artists and musicians continue to experiment with AI and come up with novel uses for it, just as digital artists did with tablets and digital painting software, and just as musicians did with keyboards and DAWs.
AI music from Suno sounds indistinguishable from non-AI-generated music to me.
In terms of how well it works, the quality of AI music is far better than AI art or code. In art there are noticeable glitches like extra fingers. Code can call nonexistent functions, not do what it is supposed to do, or have security issues or memory leaks. From what I can tell, there is no such deal-breaker for AI music.
2 replies →
Companies sell products built on code, not the code itself. Code is a means to an end.
Music is art, code is engineering. "Hackers and painters"[1] was always wishful fluff, unfortunately.
When it comes to code, I don't think anyone cares how the sausage is made, and only very rarely do people care by whom. The only question is "does it work well?"
Art is totally different. Provenance is much more important - sometimes essential. David is a beautiful work, but you could 3d print or cast a replica of "David". No one would pretend that the copy is the same as the original though - even if they're indistinguishable to the untrained eye - because one was painstakingly hand sculpted and the others were cheaply produced. This sense of provenance is the property that NFTs were (unsuccessfully) trying to capture.
[1] https://www.paulgraham.com/hp.html
If someone painstakingly hand-sculpted an exact replica of "David", is it art, or a forgery? Is hand-written code that produces generative art not art?
It's difficult to pin down the line. Ultimately it's up to the individual to define it. "The relationship to art, and this kind of painting, to their work, varied with the person entirely."[1]
[1]: https://news.berkeley.edu/2025/03/31/berkeley-voices-transfo...
1 reply →
The pushback is motivated by the interests of the petty-bourgeois class, and those are a larger proportion of the former.
Imagine being a Marxist and not respecting the craft and labor required for art production. Couldn't be me.
I’m more hopeful that MIDI completion/in-filling models will be easier for musicians to control and use. But right now, the most popular tools are things like Suno, where you barely have any control and it spits out an entire, possibly mediocre song. It’s the same vein as ChatGPT image generation vs. Stable Diffusion, where you can do much more controllable inpaints with the latter.
>someday someone will come along and make a genuinely artistically viable piece of work using AI
And in the meantime, AI will continue to clutter creative spaces and drown out actual hardworking artists, and people like you will co-opt what it means to be an artist by using tools that were trained on their work without consent.
[flagged]
> you sound like someone from the 1800's shouting about how photography should be banned and not allowed to crowd out hard working painters.
I'm saying that you shouldn't call photographs paintings because they aren't paintings. I don't particularly care if people make AI "music" or "art" and I don't particularly care if they consume it (people have been consuming awful media for the entire history of humanity, they aren't going to stop because I say so), but if you give me a ham sandwich and call it a hamburger I am going to be annoyed and tell you that it isn't a hamburger and to stop calling it that because you're misleading people who actually appreciate hamburgers.
AI "art" isn't art. I don't care whether you like it. It's like fractals or rock formations or birdsong - it may be aesthetically appealing to some people, but that isn't the definition of art.
2 replies →
> if you cant create something that competes with AI slop
Nothing can compete with AI slop when the ratio is 100000:1 AI slop vs real music.
Look at Google search results... they're not all AI slop yet, but they're 100000:1 content mills vs useful results.
Can artists compete with algorithms that push AI slop because it costs less to license?
1 reply →
It's like the reverse of the product that advertises itself as "AI driven." As if that's supposed to be a selling point. OK, it's AI driven, but is it good?
There may be short term emotional strings to pull. "AI driven!" or "AI free!"
But ultimately, no one will care if it's AI or not if it's good.
There is a difference between Richard D. James hand-training an LLM on foley sounds he recorded himself to put in his latest IDM track, and the script kiddie spamming out 50 AI-generated mixes per day to get that sweet ad revenue on Youtube.
No one is complaining about the first case, because they are outnumbered by the second 100,000 to 1. RDJ isn't gonna use suno.ai no matter how pro-LLM he is.
Note: this is for sake of argument, I am not aware of RDJ using LLMs in any shape or form.
You can already do this, but the platformised generative AI is sloppy by comparison and not that interesting.
https://github.com/acids-ircam/RAVE
The main differentiator I've noticed is: how much work is the tool doing, and how much work is the artist doing? And that's not to say that strictly more effort on the part of the artist is a good thing; it just has to be a notable amount to, IMHO, be an interesting thing.
This is the primary failure of all of the AI creative tooling, not even necessarily that it does too much, but that the effort of the artist doesn't correlate to good output. Sometimes you can get something usable in 1 or 2 prompts, and it almost feels like magic/cheating. Other times you spend tons of time going over prompts repeatedly trying to get it to do something, and are never successful.
Any other toolset I can become familiar and better equipped to use. AI-based tools are uniquely unpredictable and so I haven't really found any places beyond base concepting work where I'm comfortable making them a permanent component.
And more generally, to your nod that some day artists will use AI: I mean, it's not impossible. That being said, as an artist, I'm not comfortable chaining my output to anything as liquid and ever-changing and unreliable as anything currently out there. I don't want to put myself in a situation where my ability to create hinges on paying a digital landlord for access to a product that can change at any time. I got out of Adobe for the same reason: I was sick of having my workflows frustrated by arbitrary changes to the tooling I didn't ask for, while actual issues went unsolved for years.
Edit: I would also add the caveat that the more work the tool does, the less room the artist has to actually be creative. That's my main beef with AI imagery: it literally all looks the same. I can clock AI stuff incredibly well because it has a lot of the same characteristics: things being too shiny is weirdly the biggest giveaway. I'm not sure why AIs think everything is wet at all times, but it's very consistent. It also over-populates scenes; more shit in the frame isn't necessarily a good thing that contributes to a work, and AI has no concept at all of negative space. And if a human artist has no space to be creative in the tool... well, they're going to struggle pretty hard to have any kind of recognizable style.
There is an AI plugin for Krita that lets you define regions, selection bounds, sub-prompts, control nodes, and much more, giving you more control over a given image generation model than standard Automatic1111 or ComfyUI workflows... down to 'put an arm wearing armor here', for example, for my RPG NPC tokens.
It has a full image generation mode, it has an animation mode, and it has a live mode where you can draw a blob and it will refine it, 2-50 steps, only in that area.
So you are no longer doing per-stroke line work and saved brush settings, but you are still painting and composing an image yourself, down to the pixel. It's just that the tool it gives you is WAY more compute-intensive; the AI is sort of re-rendering a given part of the drawing, as you specify, as many times as you need.
How much of that workflow is just prompting a one-shot image, vs photoshopping +++ an image together until it meets your exact specifications?
No, the final image cannot be copyrighted under current US law in 2026, but for use in private settings like tabletop RPGs... my production values have gone way up, and I didn't need to get an MFA degree in Old Masters drawing or open a drawing studio to get those images.
What's the plugin called?
> Sometimes you can get something usable in 1 or 2 prompts, and it almost feels like magic/cheating. Other times you spend tons of time going over prompts repeatedly trying to get it to do something, and are never successful.
That's normal for any kind of creative work. Some days it just happens quickly, other days you keep trying and trying and nothing works.
I spent some of the 90s and 00s making digital art. There was a lot of hostility to Photoshop then, and a lot of "That's not really art."
But I found that if I allowed myself to experiment, the output still had a unique personality and flavour which wasn't defined by the tool.
AI is the same.
The requirement for interesting art is producing something that's unique. AI makes that harder, but there's a lot of hand-made art - especially on fan sites like Deviant Art - which has some basic craft skill but scores very low on original imagination, unusual mood, or unique personality.
The reality is that most hand-made art is an unconscious mash-up of learned signifiers mediated by some kind of technique. AI-made art mechanises the mash-up, but it's still up to the creator to steer the process to somewhere interesting.
Some people are better at that than others, and more willing to dig deep into the medium and not take it at face value.
> That's normal for any kind of creative work. Some days it just happens quickly, other days you keep trying and trying and nothing works.
Usually this means I have forgotten to eat, or that I need to take a step back and consider whatever I’m doing at a deeper level. Once I recognized that, the “keep trying and trying and nothing works” days vanished for good.
> The reality is that most hand-made art is an unconscious mash-up of learned signifiers mediated by some kind of technique
Yeah, no. Competent artists are not generalizable as "unconscious", solely "mashing up" influences or input, or even working with "signifiers": many are exquisitely aware of their sources; many employ diverse and articulated methodologies for creation and elaboration; many enjoy working with the concrete elements of their medium with no concern for signification. Even "technique" does not have a uniform meaning across different fields and modes.
> That's normal for any kind of creative work. Some days it just happens quickly, other days you keep trying and trying and nothing works.
For me, the artist, sure. I've not yet had a day where Affinity Photo just doesn't have the juice, and I don't see the appeal. Photoshop, for all its faults, doesn't have bad days.
That's the difference between the artist and the artists' tool. A difference so obvious I feel somewhat condescending pointing it out.
> I spent some of the 90s and 00s making digital art. There was a lot of hostility to Photoshop then, and a lot of "That's not really art." ... But I found that if I allowed myself to experiment, the output still had a unique personality and flavour which wasn't defined by the tool.
"People were wrong about a completely different thing" isn't the slam dunk counterpoint you think it is.
Also as someone else in that space at that time, I genuinely haven't the slightest idea what you mean about photoshop not being real art. I knew (and was an) artists at that time, we used Photoshop (of questionable legality but still) and I never heard this at all.
> The requirement for interesting art is producing something that's unique. AI makes that harder,
Understatement of the year.
> The reality is that most hand-made art is an unconscious mash-up of learned signifiers mediated by some kind of technique. AI-made art mechanises the mash-up, but it's still up to the creator to steer the process to somewhere interesting.
The difference is the lack of intent. A "person" mashes up what resonates with them (positively or negatively) and from those influences, and from the broader cultures they exist in, creates new and interesting things.
AI is fundamentally different. It is a mash-up of an average mean of every influence in the entire world, which is why producing unique things with it is difficult. You're asking for exceptional things from an average machine (in the mathematical sense, not the quality sense).
"greetings, fellow musicians" - genAI and quant guy
This is not a super well thought out position, but I've been leaning towards really disliking AI art in general (without having an opinion on any strong policy action yet).
First, art is, I think, one of the most enjoyable activities we have. One piece of evidence is that a lot of people forgo higher salaries to choose an art job (although being a job carries additional responsibilities and some inconveniences compared to doing it as a hobby). It's a shame to see it diminished, when I believe we should be diverting efforts to automating other stuff.
Second, most AI art I've seen has been quite substandard compared to human art. We still don't know very well what human emotions are, the origin of sentience and qualia, etc. But I think humans still lead here in having and probably understanding emotions. While for other tasks most implementation detail is irrelevant (e.g. in code, that it works tends to be most important, vs. minute choices in style), in art every detail is particularly relevant. Knowing this, it usually bothers me when I see this art that it doesn't carry the same knowledge of context and nuance a human would have.
Third, there's also the effect of making me question whether each piece of artwork was made by a human or by AI, which didn't exist before. It does carry a bit of a magical feeling, I think, knowing a real person made every piece of artwork prior to 2018 or so (I think algorithmic art[1] is fine in this regard, because it tends to be more clearly algorithmic, and the involvement of the artist in coding is significant), and that is now gone or at risk. Even imagining, say, their work day, or what they had for lunch, or what they talked about with coworkers or friends is pleasant to me (at the risk of romanticizing it too much).
I suppose if AI art actually understood human nature, and especially the specific context of each art piece, better than us, some of my arguments might be diminished. But the negatives so far seem to outweigh the positives, and I would like to e.g. give preference to content that doesn't use AI art.
(It is, admittedly, also the case that we lost a similar amount of craftsmanship when the industrial revolution happened, and in return we were able to support a larger population and greater material conditions for most people. Every object now isn't carefully handcrafted. I think it's different because, well, now material conditions are relatively abundant, and second, there's no such insatiable, significant, and irreplaceable demand for art as there was for common industrialized objects (take shoes, for example), at least not to the same extent or vital significance. That is, the ability to have a shoe at all far outweighs it being carefully handcrafted, I believe; while experiencing a poorly made AI movie or artwork might actually be worse than none at all (or simply an older, human-made movie), and it also gets more cumbersome to evaluate for ourselves whether AI was employed or not. Also, while, say, shoes only last a limited time and need to be constantly produced, good artwork can last indefinitely (using digital storage), and even if you account for cultural change and relevance, can still last a really long time, motivating investing more into it.)
I'm quite sure that if we're still around in 500 or so years, we'll still be enjoying, say, Starry Night by Vincent van Gogh (probably as a digital reproduction). Current AI art will probably be largely discarded, so it seems a largely unwise investment. Actually, this kind of applies to code as well. It seems plausible Linux could still be used 500 years from now (see how we still value finding Unix v4 50 years later), or at least be of some interest. Those durable intellectual goods don't seem like wise places to invest anything but the best of us :) (at least in the cases where the work isn't disposable).
The arguments above also don't seem to apply, say, in concept stages, or for bland corporate diagrams that will be disposed of in a day and of which a huge quantity is needed. I think the main criteria I would evaluate are: (1) Was it enjoyable to produce (for the artist(s))? (2) Will it have a significant (artistic) impact on who is experiencing it? (3) Will it last a long time?
[1] W.r.t. algorithmic art (and digital art in general; take bytebeat[2] for example), which is a field I really love, I am not any kind of absolutist about it. I know there tend to be far more degrees of freedom for human expression in a manual piece than in an algorithmic piece, so I see it more as a complement than a substitute for more conventional art. I'd never give up hearing music played by human musicians for bytebeat; it's just that bytebeat is a lovely, experimental other dimension of expression. Writing a prompt offers too few degrees of freedom and too little context, and too uniform a context, less rich than what humans can provide.
[2] https://dollchan.net/bytebeat/
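As a tiny taste of what bytebeat is, a minimal sketch (one of the classic formulas, not anything from that site specifically); pipe it into an 8-bit, 8 kHz raw player, e.g. aplay -f U8 -r 8000:

    import sys

    # One byte of audio per integer time step t; the whole "score" is this one expression.
    t = 0
    while True:
        sample = (t * ((t >> 12 | t >> 8) & 63 & t >> 4)) & 0xFF
        sys.stdout.buffer.write(bytes([sample]))
        t += 1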
Cf. Holly Herndon's album Proto.
This is something people spent a lot of time on; it's trained lovingly on only their own stuff, and it makes for some great music.
It's "AI", but in a way that's almost unrecognizable to us now: it's not attached to some product, and it's not about doing special prompting. It is definitely pop/electronic music, but it follows from a tradition of experimentation between what we can control and what we can't, which here is their bespoke stochastic program.
https://youtu.be/sc9OjL6Mjqo
It is not about how the computer or the model enables us, which is such a silly framing. (As if art were simply about being able to do something or not!) It's about doing something with the pieces you have that only those pieces can do.
Holly Herndon's music is original and creative. Unlike most LLM-generated pastiche text, picture or music.
And since it's from 2019, it's not quite the same thing. I like it, unlike the current wave of unwanted LLM slop.
It's original. Of course if 1000 people were doing the same with minimal creative effort and passing it off as something else, that would ruin it.
Gotcha thanks!
1 reply →
Electronic music history is basically a graveyard of "this isn't real music" takes that aged badly
> it's just creative prompting,
Sure, you just can't upload the resulting track directly to Bandcamp, but you're free to "creatively prompt" on Suno all you want; they'll even host your "music".
It's also a matter of resources. People uploading gigabytes of AI-generated slop a day isn't really what Bandcamp is about.
Along the same lines, the anti-AI attitude among musicians today reminds me quite a bit of the anti-synthesizer attitude of the 60's and 70's, down to the same exact talking points: fears of “real” musicians being replaced by nerds pushing buttons on machines that can imitate those musicians.
I think the fears were understandable then, and are understandable now. I also think that, just as the fears around synthesizers didn't come to fruition, neither will the fears around AI come to fruition. Synthesizers didn't, and generative AI won't, replace musicians; rather, musicians did and will add these new technologies into their toolsets and use them to push music beyond what was previously understood to be possible. Synthesizers didn't catch on by just imitating other instruments, but by being understood and exploited as instruments in their own right; so will generative AI catch on not by just imitating other instruments, but by being understood and exploited as an instrument in its own right.
The core problem right now is that AI (even beyond just music) ain't being marketed as a means of augmenting one's creativity and skills, but as a means of replacing them. That'll always be misguided, both in the practical sense of producing worse outputs and in the philosophical sense of atrophying that same creativity and skills. AI doesn't have to produce slop, but it will inevitably produce slop when it's packaged and sold and marketed in a way that actively encourages slop — much like taking one of those cheap electric keyboards with built-in beats and songs and advertising it as able to replace a whole band. Yeah, it's cool that keyboards can play songs on their own and AI can generate songs on their own, but that output will always be subpar compared to what someone with even the slightest bit of creativity and skill can pull out of those exact same tools.
> generative AI catch on not by just imitating other instruments,
but generative AI didn’t catch on by "imitating instruments." It caught on by imitating artists, which streaming platforms and record labels then repackage and outsell you with. false analogy.
This argument won't get you anywhere because "imitating artists" and "outselling artists" aren't actually the same thing.
i.e. complaining about training on copyrighted material and getting it banned is not sufficient to prevent creating a model that can create music that outsells you. Because training isn't about copying the training material, it's just a way to find the Platonic latent space of music, and you can get there other ways.
https://en.wikipedia.org/wiki/Law_of_large_numbers
https://phillipi.github.io/prh/
2 replies →
> but generative AI didn’t catch on by "imitating instruments."
My bad. As the first part of my comment suggested, what I meant to say here was "imitating instruments and the performers thereof".
> which streaming platforms and record labels then repackage and outsell you with
But that's the thing: it doesn't seem very likely that they'd ever succeed at actually outselling very many actual musicians, for the same reason those cheap keyboards that can play pop songs at the press of a button don't actually replace any actual musicians: not just because the quality sucks compared to even amateur performers, but because even if the quality didn't suck, the end result is about as interesting to the audience as a karaoke backing track or muzak playing in an elevator. If anyone can press a button to make some statistical average of popular music, then that's gonna get real boring real quick, while the actual musicians will be making actual, novel music. It's just like what happened to the “vaporwave” and “nightcore” genres: they got flooded with “new songs” that are just slowed down / sped up (respectively) versions of existing songs, and nobody bothered seeking out those songs unless they were really into vaporwave/nightcore for their own sake or they were trying to put together one of the umpteen bajillion “anime girl studying while listening to lo-fi beats” playlists out there.
That is:
> false analogy.
Then here's another “false” analogy for you: just like with synthesizers, just like with vaporwave/nightcore, just like with all sorts of other musical phenomena where all of a sudden people with no skill could very easily and cheaply make musical slop, this new AI-driven wave of slop will, too, consume itself until it's yet another layer of background noise against which the actual musicians distinguish themselves and push the boundaries of music. It's a wildfire burning away yet another underbrush of mediocrity and creative stagnation, and while it's absolutely terrifying and dangerous in the present, it paves the way for a healthier regrowth in the aftermath.
People don't listen to music because it sounds good; most music sounds downright awful. They listen to it for the human stories and connection. 99% of the music I listen
Nobody listens to techno - Eminem
AI needs to make music that sells, the same way Coppola and Brando sell The Godfather. Until it can do that, literally nobody will care.
> I'm a musician, but I'm also pretty amused by this anti-AI wave.
Let me guess: you're an amateur musician. Not that there's anything wrong with that, but it makes it much easier to be amused about this topic.
> There was recently a post referencing Aphex Twin and old-school IDM and electronic music stuff, and I can't help being reminded how every new tech kit was always demonized until some group of artists came along and made it their own.
What are you talking about? Which "tech kit" got demonized by whom? Of course, there were always controversies around techniques like sampling or whatever, or conservatives in the UK demonizing rave culture, but otherwise, I have no idea what you're referring to.
He's talking about the demonization of synthesizers, sampling, and digital audio workstations when each were respectively released.
There was no "demonization" of these things even remotely comparable to what we are witnessing now w.r.t. AI-generated music (and I'm old enough to remember most of them). Of course there was intense dislike from certain groups representing the "old school" toward new styles of music and new techniques. However, at the same time, you also had the "new wave" which loved them and made them successful. For instance, a ton of people hated disco; at the same time, a ton of people genuinely loved it. Same with practically any kind of electronic music. This simply does not exist with GenAI music. People listen to GenAI music because they either don't care or don't know, not because they genuinely prefer it. There's absolutely nothing new about GenAI music that would make it exciting.
So you just want to be lazy and hand over to the parrot machine the very essence of what it means to be creative. I am utterly baffled by this recurring comparison between past electronic tools, which actually have a pretty harsh learning curve to master, and a software contraption that overtakes your creative agency. I see it everywhere, like comparing Midjourney to the shift to digital photography. What are y’all blokes on? How is it possible that even fine minds just lazily accept such a flawed parallel between two completely different technological paradigms?
nobody demonized afx or idm bro. autotune, yes. but that's different. damn autotune to hell
[dead]
[flagged]
The various lo-fi channels are also likely carrying heavily AI-generated music and it's actually kind of fine. The 'pieces' seem like undifferentiated background music of a certain mood, which is often what I'm looking for while I'm doing something else.
Previously, search was such a big problem. For instance, I'm not big on hip-hop and so on but I like songs like Worst Comes To Worst by Dilated Peoples. I've searched in all sorts of ways for other songs like that and come up with a handful of examples. Likewise, I want the vibe of Thick As A Brick by Jethro Tull during various parts. It's hard to find this kind of stuff.
But Suno.ai can generate that boom-bap vibe pretty easily. It's not the kind of thing where I'm going to put the same song on all the time like I do with the Dilated Peoples one, but it's good enough to listen to while I'm working.
[flagged]
>someday someone will come along and make a genuinely artistically viable piece of work using AI
Always has been :)
https://www.youtube.com/watch?v=SpUj9zpOiP0
(And honorary mention)
https://www.youtube.com/watch?v=fYKAOPj_uts
Lobotomywave