Comment by WD-42
2 days ago
Where are these places where everything is written by a LLM? I guess just don’t go there. Most of the comments on HN still seem human.
I think the front page of HN has had at least one LLM-generated blog post or large GitHub README on it almost every day for several months now.
Tbh I prefer to read/skim the comments first and only occasionally read the original articles if the comments make me curious enough. So far I've never ended up checking anything that seemed AI-generated.
It’s pretty much all you see nowadays on LinkedIn. Instagram is infected by AI videos that Sora generates while X has extremist views pushed up on a pedestal.
The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
Ironically this post is written in a pretty bland, 'blogging 101' style that isn't enjoyable to read and serves just to preach a simple, consensus idea to the choir.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
True, but one of the least-explored problems with AI is that because it can regurgitate basic writing, basic art, basic music with ease, there is this question:
Why do it at all if I won't do better than the AI?
The worst risk with AI is not that it replaces working artists, but that it dulls human creativity by killing the urge to start.
I am not sure who said it first, but every photographer has ten thousand bad photos in them and it's easier if they take them at the beginning. For photographers, the "bad" is not the technical inadequacy of those photos; you can get past that in the first one hundred. The "bad" is the generic, uninteresting, uninspiring, underexplored, duplicative nature of them. But you have to work through that to understand what "good" is. You can't easily skip these ten thousand photos, even if your analysis and critique skills are strong.
There's a lot to be lost if people either don't even start or get discouraged.
But for writing, most of the early stuff is going to read much like this sort of blog post (simply because most bloggers are stuck in the blogging equivalent of the ten thousand photos; the most popular bloggers are not those elevating writing).
"But it looks like AI" is the worst, most reflexive criticism, because everything always will look like AI, since AI is constantly stealing new things. You cannot get ahead of the tireless thief.
The damage generative AI will do to our humanity has only just started. People who carry on building these tools knowing what they are doing to our culture are beneath our contempt. Rampantly overcompensated, though, so they'll be fine.
I continually resist the urge to deploy my various personas onto HN, because I want to maintain my original HN persona. I am not convinced other people do the same. It is not that difficult to write in a way that avoids some of the telltale signs.
> I guess just don’t go there.
How do you know? A lot of the stuff I see online could very much be produced by LLMs without me ever knowing. And given the economics I suspect that some of it already is.
Many Instagram and Facebook posts are now LLM-generated to farm engagement. The verbosity and breathless excitement tend to give it away.
There was recently this link talking about AI slop articles on medium
https://rmoff.net/2025/11/25/ai-smells-on-medium/
He doesn't link many examples, but at the end he gives the example of an author pumping out 8+ articles in a week across a variety of topics. https://medium.com/@ArkProtocol1
I don't spend time on medium so I don't personally know.
I've seen AI-generated comments on HN recently, though not many. Users who post them usually only revert to human when challenged (to reply angrily), which hilariously makes the change in style very obvious.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
LinkedIn
I see them regularly on several subreddits I frequent.
There are already many AI-generated submissions on HN every day. Comments maybe less so, but I've already seen some, and the amount is only going to increase with time.
Every time I see AI videos in my YouTube recommendations I say "don't recommend this channel", but the algorithm doesn't seem to get the hint. Why don't they offer a preference option for "don't show me AI content"?
You assume that detecting AI content is trivial. It isn't.
Because they have a financial incentive not to.