Comment by zaptrem
17 hours ago
I train music generation models. They are very trivial to detect. In fact, detecting their output, then training them to evade the detector, is a big part of training them! But the detectors win instantly without some hardcore regularization. Simply turn that off and you've instantly got a perfect classifier.
This isn't like text classification: the signal is many orders of magnitude higher in bitrate, so many more corners need to be cut. It's likely going to be nearly impossible, or at least not remotely worth it, to generate an audio signal that is truly undetectable in the foreseeable future.
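The detect-then-evade loop described above can be sketched as a toy. This is a minimal sketch under heavy assumptions: real and generated clips are reduced to a single scalar feature, the detector is a one-feature logistic regression, and the generator's only knob is a shift in that feature. None of this reflects an actual music-model training setup; it just illustrates why the detector "wins instantly" and how adversarial updates erode that margin.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_detector(real, fake, steps=500, lr=0.1):
    """Toy detector: logistic regression on one scalar feature."""
    X = np.concatenate([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    w = b = 0.0
    for _ in range(steps):  # plain gradient descent on log loss
        p = sigmoid(w * X + b)
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, real, fake):
    # Balanced accuracy: real predicted real, fake predicted fake.
    return 0.5 * (np.mean(sigmoid(w * real + b) > 0.5)
                  + np.mean(sigmoid(w * fake + b) <= 0.5))

# "Real" clips: feature centered at 0. The generator's telltale artifact
# is a shift in this feature (a hypothetical stand-in for spectral cues).
real = rng.normal(0.0, 1.0, 2000)
noise = rng.normal(0.0, 1.0, 2000)
shift = 3.0  # a blatant artifact -> trivially detectable

w, b = train_detector(real, noise + shift)
acc_before = accuracy(w, b, real, noise + shift)  # near-perfect detector

# Adversarial loop: retrain the detector, then take a few generator steps
# that push the detector's output toward "real" on generated clips.
for _ in range(20):
    w, b = train_detector(real, noise + shift)
    for _ in range(5):
        p = sigmoid(w * (noise + shift) + b)
        # Gradient ascent on log p("real"): d/dshift of log p is (1 - p) * w.
        shift += 0.05 * np.mean(1.0 - p) * w

w, b = train_detector(real, noise + shift)
acc_after = accuracy(w, b, real, noise + shift)  # much closer to chance
```

The point of the toy: the freshly trained detector separates real from generated almost perfectly, and only the adversarial pressure on the generator (the "hardcore regularization" step) drags its accuracy back down.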
We are talking about entirely different things.
You are right, the output of a model that generates music directly is, for now, easy to categorize as AI.
But this big flux of AI-generated music online isn't really that. It's a tiny bit of autogenerated stuff and a whole lot of automatically remixed stuff. The reason it can't be easily classified as AI is that quite a bit of human-produced music is also made that way, and you'd just shut out real users.
> They are very trivial to detect.
Today. Trying to detect AI is like extracting water from puddles in a lake that is quickly drying up. What is the point in the short term if it's impractical in the long term? It will catch some low-hanging fruit in the best case, and produce false positives in the worst.
My point is you should consider creating truly undetectable audio end to end with AI to be effectively impossible for the foreseeable future (i.e., I would bet money it is still trivially detectable five years from now). It won't be detectable to humans, though, only models.
In the broad strokes of "AI generated", I wouldn't be so sure.
If the AI picked a bunch of samples, combined them, and mastered the result using an MCP connection to a DAW, how is that particularly distinguishable from a person doing the same thing badly?
I can see how an LLM generating pictures of spectrograms is easy to spot, but much less so with tool following.
Even worse if you use a VLA to actually play the guitar and use the recording as a sample.
There's some time and setup needed to make it happen, sure, but somebody could put all that in a studio and expose an MCP.
why would you admit so openly to being part of the problem?
Why not? Even now it's still common to see people here openly admit to working at Meta. Making AI music less detectable is comparatively benign.