Comment by omk

4 days ago

After seeing a synthetic version that mimics the tone well enough, the real HN felt slightly less distinct once I came back. When every information style gets a believable AI twin, our usual cues for judging what's credible start to wobble.

To be clear, the strange part wasn't that it fooled me (it didn't). The issue was some form of "signal contamination" that my brain experienced.

"What's credible" is an entirely different question to "what's human-made".

Do you not feel this "signal contamination" when reading the normal HN feed?

During my first ~2 years on HN (starting ~10 years ago), I was constantly exposed to new things: blog posts with interesting, novel content and insightful comment sections. After that, the feed started to feel like 98% noise. These days I'm happy if I see an interesting "signal" once a month, and that was already the case in pre-LLM years.

It's probable that LLMs are already operating on the real HN, either agentically or driven by users who want to post intelligent-sounding comments for the sake of upvotes.

Idle curiosity: do you also get signal contamination from human-generated media that misrepresents the truth or spreads misinformation? I wonder whether the surge in LLM presence is forcing us to take a harder look at how we lie and confabulate when interacting with each other, even before introducing a dream machine into the mix.