Comment by rob

17 hours ago

They get the privilege of immediately polluting the website with LLM-generated comments.

Many of them sound and look completely normal and have others on here interacting with them. They don't use em dashes, sometimes they'll use all lowercase text, sometimes the owner of the bot will come out and start commenting to throw you off.

All examples I've witnessed here.

HN should immediately start implementing at least some basic bot detection methods, without requiring us to email them every time. I've discovered multiple bots making detailed comments within 30 seconds of each other in different threads, something a normal human wouldn't be able to do. That alone should flag the account for review. Obviously they'll soon get smarter and stop doing that, but it would help in the short term.
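The heuristic described above can be sketched in a few lines. This is a hypothetical illustration, not anything HN actually runs: the record fields, the 30-second window, and the 200-character threshold for a "detailed" comment are all assumptions.

```python
from collections import defaultdict

WINDOW_SECONDS = 30       # two detailed comments closer than this are suspicious
MIN_DETAILED_LEN = 200    # assume "detailed" means a reasonably long comment

def flag_suspicious(comments):
    """Return usernames whose consecutive detailed comments land in
    *different* threads less than WINDOW_SECONDS apart.

    comments: iterable of dicts with 'user', 'thread', 'ts' (epoch seconds),
    and 'text' keys (a hypothetical schema for this sketch).
    """
    by_user = defaultdict(list)
    for c in comments:
        if len(c["text"]) >= MIN_DETAILED_LEN:
            by_user[c["user"]].append(c)

    flagged = set()
    for user, posts in by_user.items():
        posts.sort(key=lambda c: c["ts"])
        for prev, cur in zip(posts, posts[1:]):
            # A human can't type two detailed comments into two separate
            # threads within the same half-minute.
            if cur["thread"] != prev["thread"] and cur["ts"] - prev["ts"] < WINDOW_SECONDS:
                flagged.add(user)
                break
    return flagged
```

As the comment notes, this only flags accounts for human review; banning directly on a signal this crude would just push bot authors to add a random delay.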

I'd like to say it's not an issue, but everything I described above has happened in less than a month, and every day now I'm discovering more bots here.

I do agree that bots are or will be an existential risk for every online forum. But I also think that an attempt to fix it that takes away anonymity is a cure that's worse than the disease.

My best understanding is yes -- there are signals that somebody is a bot (like how quickly they post), but if HN bans based on those signals then whoever made the bot will just keep tweaking the code.

I feel like I rarely see bots in the top 5 comments of any article I read, or otherwise causing major disruption.

I think we just need to get creative about ways a platform can prove somebody is an invested human without tying that proof back to any personally identifiable information.