Comment by HeinzStuckeIt

3 days ago

I follow a couple of writers on X through Nitter on a desktop browser. These writers inevitably draw bot comments whenever they touch on something relevant to one or another powerful country’s politics. For me, it’s easy to verify that these commenters (who often have convincing-sounding fake names and photos) are bots: I just ctrl-click a commenter’s username and, in the tab that immediately opens, see at a glance that they post weird single-issue material at an unusually sporadic pace, and often in tellingly flawed English.

Do I suspect correctly that in the way most people consume X, through the official website or an app, this is not so transparent? Whether because opening new views is so slow on a phone screen, or because the official interfaces intersperse content with advertisements and other visual clutter? I don’t think state actors would be so active in trying to manipulate discourse if the platform hadn’t degraded to the point where their activity isn’t obvious to most users.

Why do bots have flawed English? Seems like with LLMs being a thing they would not.

  • “Bots” is a cover term for both purely automated scripts and human posters who use some kind of tooling to post more efficiently in order to manipulate discourse.

    In this case, it’s obvious that a lot of Russian state-actor employees, for instance, are not passing their writing through an LLM, but rather are just quickly vomiting out a comment in their imperfect English. Exposés of Russian troll factories show that a lot of these employees are young university-educated people who only want the money, and don’t have strong feelings for the propaganda they are posting, so they half-arse it.

  • They’re not necessarily bots in the sense of automated accounts, but rather the older troll-farm model: a bunch of people just clicking away.