
Comment by NitpickLawyer

9 hours ago

> or a spammer where your incentives have probably increased.

Slight pushback on this. The web has been spammed with subpar tutorials for ages now. The kind of Medium "articles" that are nothing more than "getting started" steps + slop, which got popular circa 2017-2019, is imo worse than the listy-boldy-emoji-filled articles that the LLMs come up with. So nothing gained, nothing lost imo. You still have to learn how to skim and get signals quickly.

I'd actually argue that now it's easier to winnow the slop. I can point my cc running in a devcontainer to a "tutorial" or lib / git repo and say something like "implement this as an example covering x and y, success condition is this and that, I want it to work like this, etc.", and come back and see if it works. It's like a litmus test of a tutorial/approach/repo. Can my cc understand it? Then it'll be worth my time looking into it. If it can't, well, find a different one.
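
A rough sketch of that litmus-test loop, assuming "cc" here means the Claude Code CLI (`claude -p` for non-interactive runs) and that the repo ships a devcontainer; the repo path, prompt, and `make test` success condition are placeholders:

```python
# Hypothetical sketch of the litmus test described above: spin up the repo's
# devcontainer, have the agent try to build a small example from the tutorial,
# then check whether the stated success condition actually holds.
# Assumes the Dev Containers CLI (`devcontainer`) and the `claude` CLI are
# installed; REPO, the prompt, and the `make test` check are placeholders.
import subprocess

REPO = "/path/to/tutorial-repo"  # the tutorial / lib / git repo under evaluation

def run(*cmd):
    """Run a command from the repo root and capture its output."""
    return subprocess.run(cmd, cwd=REPO, capture_output=True, text=True)

# Start the container defined by the repo's .devcontainer config.
run("devcontainer", "up", "--workspace-folder", ".")

# Ask the agent for an example with an explicit, checkable success condition.
prompt = (
    "Read the tutorial in this repo and implement a minimal working example "
    "covering X and Y. Success condition: `make test` exits 0."
)
run("devcontainer", "exec", "--workspace-folder", ".", "claude", "-p", prompt)

# The actual litmus test: does the agent's output pass the check?
result = run("devcontainer", "exec", "--workspace-folder", ".", "make", "test")
print("worth a closer look" if result.returncode == 0 else "find a different one")
```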

I think we're seeing the "low hanging fruit" of slop right now, and there's an overcorrection of attitude against "AI". But I also see that I'm getting more and more workflows working for me, more or less tailored and adapted to me and my uses. That's cool. And it's powered by the same underlying tech.

The thing is, what is the actual point of this approach? Is it for learning? I strongly believe there's no learning without immersion and practice. Is it for automation? The whole idea of automation is to not think about the thing again unless there's a catastrophic error; it's not about babysitting a machine. Is it about judgment? Judgment is something you hone by experiencing stuff and then deciding whether it's bad or not. It's not something you delegate lightly.

The problem isn't that AI slop is doing something new. Phishing, blogspam, time-wasting PRs, website scraping, etc. have all existed before.

The problem is that AI makes all of that far, far easier.

Even using tooling to filter articles doesn't scale as slop grows to be a larger and larger percentage of content, and it means I'm going to have to consider prompt injections and running arbitrary code. All of this is a race to the bottom of suck.

The difference is that the cost of slop has decreased by orders of magnitude. What happens when only 1 in 10,000 of those tutorials you can find is any good, from someone actually qualified to write it?

  • One definite benefit of AI is AI-summary web search. Searching for answers to simple questions and not having to cut through SEO slop is such an improvement.

    • The summary is often incorrect in at least some subtle details, which is invisible to a lot of people who do not understand LLM limitations.

      Now, we can argue that a typical SEO-optimized garbage article is not better, but I feel like the average person's trust in those was lower.

    • I don't think searching for answers to simple questions was a problem until Google nerfed their own search engine.

    • There was a time, before SEO slop, when web search was really valuable.

      We're fighting slop with condensed slop

    • Hard disagree. AI summaries are useless for the same reason the AI summaries from Google and DDG are useless: they're almost always missing the context. The AI page summaries typically take the form of "here's the type of message the author of this page is trying to convey" instead of "here's what the page actually says". Just give me the fucking contents. If I wanted AI slop I'd ask my fucking doorknob.