Comment by tyre

7 days ago

I wish they wouldn’t call it “AI slop” before acknowledging that most of the reported bugs are real.

Let’s bring a bit of nuance: there’s a difference between mindless drivel (e.g. LinkedIn influencer posts, spammed issues where the LLM simply got things wrong) and using LLMs to find and build useful things.

I think they are saying what you want them to say. In the past they got a bunch of AI slop and now they are getting a lot of legit bug reports. The implication being that the AI got better at finding (and writing reports of) real bugs.

If I read the sentence correctly they're saying that past reports were AI slop, but the state of the art has advanced and that current reports are valid. This matches trends I've seen on the projects I work on.

It can be correct and slop at the same time. The reporter could have written it up in a way that makes it clear a human reviewed and cared about the report.

Slop is a function of how the information is presented and how the tools are used. People don't care if you use LLMs if they can't tell you used them; they care when you send them a bunch of bullshit with 5% of value buried inside it.

If you're reading something and you can tell an LLM wrote it, you should be upset. It means the author doesn't give a fuck.

  • No it can't. These aren't "Show HN" posts about new programs people have conjured with Claude. They're either vulnerabilities or they're not. There's no such thing as a "slop vulnerability". The people who exploit those vulnerabilities do not care how much earlier reporters "gave a fuck" about their report.

    This is in the linked story: they're seeing increased numbers of duplicate findings, meaning, whatever valid bugs showboating LLM-enabled Good Samaritans are finding, quiet LLM-enabled attackers are also finding.

    People doing software security are going to need to get over the LLM agent snootiness real quick. Everyone else can keep being snooty! But not here.

    • Everyone is free to be as snooty as they like. If a report is harder to read/understand/validate because the author just yolo'ed it with an LLM, that's on the report author, not on the maintainers.

      It's not okay to foist work onto other people because you don't think LLM slop is a problem. It is absolutely a problem, and no amount of apologizing and pontificating is going to change that.

      Grow up and own your work. Stop making excuses for other people. Help make the world better, not worse. It's obvious that LLMs can be useful for this purpose, so people should use them well and make the reports useful. Period.
