
Comment by MetaWhirledPeas

19 hours ago

> They are very trivial to detect.

Today. Trying to detect AI is like extracting water from puddles in a lake that is quickly drying up. What is the point in the short term if it's impractical in the long term? At best it will catch some low-hanging fruit; at worst it will produce false positives.

My point is that you should consider creating truly undetectable AI-generated audio, end to end, to be effectively impossible for the foreseeable future (i.e., I would bet money it is still trivially detectable five years from now). It won't be detectable to humans, though, only to models.

  • In the broad strokes of "AI-generated", I wouldn't be so sure.

    If the AI picked a bunch of samples, combined them, and mastered the result through an MCP connection to a DAW, how is that particularly distinguishable from a person doing the same thing badly?

    I can see how an LLM generating pictures of spectrograms is easy to spot, but it's much less so with tool following.

    It's even worse if you use a VLA model to actually play the guitar and use the recording as a sample.

    There's some time and setup needed to make that happen, sure, but somebody could put all of that in a studio and expose it as an MCP server.
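To make the "easy to spot" claim concrete: one crude spectral red flag is that many neural audio generators are trained on band-limited data, so their output has almost no energy above a hard frequency cutoff. Here is a minimal sketch of that heuristic; the function name and the 16 kHz cutoff are illustrative assumptions, not a real detector, and actual detection systems use learned features rather than a single ratio.

```python
import numpy as np

def high_band_energy_ratio(signal, sample_rate, cutoff_hz=16000):
    """Fraction of spectral energy above cutoff_hz.

    Heuristic illustration only: a near-zero ratio on a nominally
    full-band recording is one crude hint of band-limited
    (possibly generated) audio. Real detectors are far more subtle.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return float(spectrum[freqs >= cutoff_hz].sum() / total)

# Synthetic check: white noise is broadband, while a pure 1 kHz tone
# has essentially no energy above 16 kHz.
rate = 44100
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
noise = rng.standard_normal(rate)    # broadband signal
tone = np.sin(2 * np.pi * 1000 * t)  # band-limited signal

print(high_band_energy_ratio(noise, rate))  # well above zero
print(high_band_energy_ratio(tone, rate))   # approximately zero
```

Note the limitation this sketch illustrates: a pipeline that assembles real recorded samples (the tool-following scenario above) would pass this check trivially, which is exactly the commenter's point.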