Comment by rudhdb773b
10 hours ago
Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI generated content.
It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.
I disagree. I like to read articles and explore Show HN posts, but in the past 6 months I’ve wasted a lot of time following HN links that looked interesting but turned out to be AI slop. Several recent Show HN posts have taken me to repos that were AI-generated plagiarisms of other projects, presented on HN as the posters’ own original ideas.
Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.
For this article the accusations are not about slop (which wastes your time) but about tell-tale signs of AI tone. The content is interesting, but you can tell someone has done heavy AI polishing, which gives articles a laborious tone and tends to produce a lot of words around a small amount of content (in other words, you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in).
Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.
> you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in
This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.
> This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.
You're fighting an uphill battle against the model's inherent tendency to produce more and longer text. There's also the regression-to-the-mean problem, so you get less (and more generic) information even though the text is shorter.
Basically, it doesn't work.
You're suggesting this is the complainant's fault?
Yes. These HN guidelines already basically cover it:
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
> Yes. These HN guidelines already basically cover it:
>> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
>> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
They don't. Warning about AI slop isn't a shallow dismissal of other people's work, and AI-generated content isn't a tangential annoyance on the level of formatting or back-button breakage.
Yes, because all of them are now irrational about the possibility that an LLM wrote something they read.
HN has gotten to the point where it’s not even worth clicking the link because of course it’s ai slop.
There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.
If you’re looking for a place that surfaces only human-written content regardless of whether it’s interesting, rather than interesting content regardless of how it was written, HN is not the place.
There might be a market for your alternative though. Should be easy enough to build with Claude Code.
If the content was interesting, the author would've written about it himself.
By asking AI to write the article for you, you're asserting that the subject matter isn't interesting enough to be worth your time to write, so why would it be worth my time to read?
1 reply →
I know the author personally. He's hardly the type of person to publish AI slop. Read his other articles and watch his talks, this is very much Henry's literary style.
> Read his other articles
Sure, let me have a look.
He wrote 8 similarly lengthy blog posts in just 2 months:
https://www.juxt.pro/blog/from-specification-to-stress-test/
https://www.juxt.pro/blog/three-paradoxes/
https://www.juxt.pro/blog/what-outlasts-the-code/
https://www.juxt.pro/blog/composition-at-a-distance/
https://www.juxt.pro/blog/new-vocabulary-for-an-old-problem/
https://www.juxt.pro/blog/softwares-second-heroic-age/
https://www.juxt.pro/blog/capability-hyperinflation/
They contain a lot of classic LLMisms:
"Implementation is the shrinking currency. Not because it’s worthless, but because supply is exploding."
His past writing was much, much less wordy: https://henrygarner.com/
Stop voting up slop articles and I'll stop commenting on it.
Point to one.
This is on the front page now https://rajnandan.com/posts/taste-in-the-age-of-ai-and-llms/