Comment by JonathanFly

22 days ago

> While I do agree with the content, this tone of writing feels awfully similar to LLM generated posts

> Commenter's history is full of 'red flags': - "The real cost of this complexity isn't the code itself - it's onboarding" - "This resonates."

Wow, it's obvious in the full comment history. What is the purpose of this stuff? Do social marketing services maintain armies of bot accounts that build up credibility with normal-ish comments, so they can be called on later like sleeper cells for marketing? On Twitter I already have to scroll down to find the one human reply on many posts.

And when the bots get a bit better (or people get less lazy about prompting them; I'm pretty sure I could prompt around this classic prose style), we'll have no chance of knowing what's a bot. How long until the majority of the Internet is essentially a really convincing version of r/SubredditSimulator? When I stop being able to recognize the bots, I wonder how I'll feel. They would probably be writing genuinely helpful or funny posts, or telling a touching personal story I upvote, but it's all pure bot creative writing.

Building up karma, either for its own sake or to gain the right to flag politically disagreeable content.

> Do social marketing services maintain armies of bot accounts that build up credibility with normal-ish comments, so they can be called on later like sleeper cells for marketing?

Russia and Israel are known to have run full-time operations doing exactly this for well over a decade. By Twitter's own account, 25% of users were bots back in 2015 (their peak user year). Even here on HN, if you look at the most trafficked Israel/Palestine threads, there are lots of people complaining about getting moderated into oblivion, with the conversation pushed toward neutral/pro-Israel and negative comments silenced by a ghost army of moderators.