Comment by rudhdb773b

10 hours ago

Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI-generated content.

It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.

I disagree. I like to read articles and explore Show HN posts, but in the past 6 months I’ve wasted a lot of time following HN links that looked interesting but turned out to be AI slop. Several Show HN posts lately have taken me to repos that were AI-generated plagiarisms of other projects, presented on HN as the posters’ own original ideas.

Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.

For this article the accusations are not about slop (which will waste your time) but about tell-tale signs of AI tone. The content is interesting, but you can tell someone has been doing heavy AI polishing, which gives articles a laborious tone and tends to produce a lot of words around a smaller amount of content (in other words, you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in).

Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.

  • > you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in

    This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.

    • > This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.

      You're fighting an uphill battle against the inherent tendency to produce more and longer text. There's also the regression to the mean problem, so you get less information (and more generic) even though the text is shorter.

      Basically, it doesn't work.

You're suggesting this is the complainant's fault?

  • Yes. These HN guidelines already basically cover it:

    > Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

    > Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

    • > Yes. These HN guidelines already basically cover it:

      >> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

      >> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

      They don't. Warning about AI-generated content isn't a dismissal of other people's work, and it isn't a tangential annoyance.

  • Yes, because all of them are now irrational about the possibility of an LLM having written something they read.

HN has gotten to the point where it’s not even worth clicking the link, because of course it’s AI slop.

There is real content buried in the haystack, but we almost need some kind of curator to find and surface it, rather than a vote system where most people vote on the title alone.