
Comment by nottorp

15 hours ago

So we'll be going back to publishers as curators. Good for the publishers, I guess.

It's a combination of publisher lock-in and folks attempting wild new stuff that breaks out of what AI typically produces.

Earlier this year, with a lot of luck, the Canadian duo Angine de Poitrine suddenly got discovered because they're doing stuff that falls outside conventional music styles.

They aren't unique in the experimental territory they're exploring, but it has highlighted a hunger from audiences to find stuff outside the median. Folks like Frank Zappa had to relentlessly advocate for themselves, having figured there was a middle ground between these two things.

this seems like a pattern seen across industries when it comes to AI

even more consolidation and lock-in

On a similar note I recently deleted a whole bunch of automated tests because if the AI is going to write most of the code then I should test it to make sure it's good! This won't work for all projects, but for my indie games it's a good idea.

  • > I recently deleted a whole bunch of automated tests because if the AI is going to write most of the code then I should test it to make sure it's good!

    ??

    You say you deleted the tests, because you "should test it"? The logic seems inconsistent.

    Sanity checking LLM-generated code with LLM-generated automated tests is low-cost and high-yield because LLMs are really good at writing tests.

• I think LLMs are really bad at writing tests. In the good old days you invested in making your test code structured and understandable. Now we all just say "test this thing you just generated".

I shipped a really embarrassing off-by-one error recently because some polygon representations close the ring by repeating the first vertex at the end as a sentinel (WKT and KML do this). When I checked the "tests", there was a generated test that asserted that a square has 5 vertices.

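      The pitfall above can be sketched in a few lines. This is a minimal illustration, not real WKT parsing; the `ring_coords` and `vertex_count` helpers are hypothetical names for this example:

      ```python
      # A WKT/KML polygon ring repeats its first vertex at the end to close
      # the ring, so a square is written as 5 coordinate pairs even though
      # it only has 4 distinct vertices.

      def ring_coords(ring: str) -> list[tuple[float, float]]:
          """Parse a simplified ring like '0 0, 1 0, 1 1, 0 1, 0 0'."""
          return [tuple(map(float, pair.split())) for pair in ring.split(",")]

      def vertex_count(coords: list[tuple[float, float]]) -> int:
          """Count distinct vertices, dropping the closing sentinel if present."""
          if len(coords) > 1 and coords[0] == coords[-1]:
              return len(coords) - 1
          return len(coords)

      square = ring_coords("0 0, 1 0, 1 1, 0 1, 0 0")
      assert len(square) == 5           # raw pairs, including the sentinel
      assert vertex_count(square) == 4  # the square really has 4 vertices
      ```

      A generated test that asserts `len(square) == 5` is "correct" against the raw representation but wrong about the geometry, which is exactly the kind of check an LLM happily produces.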

    • > ...because LLMs are really good at writing tests.

      No, they're absolutely shit at writing tests. Writing tests is mostly about risk and threat analysis, which LLMs can't do.

      (This is why LLMs write "tests" that check if inputs are equal to outputs or flip `==` to `!=`, etc.)

I think DJs with even a light catalog of their own original music will become some of the most important artists instead. Nobody has any interest in going back to the old system.

As a user I wouldn't mind, as long as it's attributed and I can skip it.

It pisses me off on YouTube - it's really hard to find something genuine in the sea of the AI-written, AI-subbed, AI-generated, and AI-published. It's a scourge not because it exists, but because the channels are lying about it AND because 99.99999% of what I've encountered isn't worth the waste heat of processing "publish 100 catchy videos about current affairs".

Why? HN isn’t “curating” the wave of AI-written tech article slop. Unclear if they should, readers here love it!

Hard to believe these models won’t get better and better at producing music that humans want to listen to.

  • The problem has never been that AI music doesn't sound good.

    • If you're already listening to bland generic pop country slop, then the AI version isn't much worse, but that's not a great thing to shoot for.

      AI music I've heard universally sounds bland and robotic.
