
Comment by itsoktocry

3 days ago

I've been a long-time Twitter user. I don't hate Elon, so when he bought it I was cautiously optimistic.

I deactivated last week. The platform is bad and getting worse. It's scammy and spammy. Everything is designed around garbage engagement, so that the X team can brag about how well the product is doing.

I follow a couple of writers on X through Nitter on a desktop browser. These writers inevitably draw bot comments whenever they touch on something relevant to one or another powerful country’s politics. For me, it’s easy to verify that these commenters (who often have convincing-sounding fake names and photos) are bots: I simply ctrl-click their usernames and, in the tab that opens, see at a glance that they post weird single-issue material at an unusually sporadic pace, often in tellingly flawed English.

Do I suspect correctly that in the way most people consume X, through the official website or an app, this is not so transparent? Whether because opening new views is so slow on a phone screen, or because the official interfaces probably intersperse content with advertisements and other visual crap? I don’t think state actors would be so active in trying to manipulate discourse if the platform hadn’t degraded to the point where their activity is no longer obvious to most users.

  • Why do bots have flawed English? Seems like with LLMs being a thing, they would not.

    • “Bots” is a cover term both for purely automated scripts and for human posters who use some kind of tooling to post more efficiently in order to manipulate discourse.

      In this case, it’s obvious that a lot of Russian state-actor employees, for instance, are not passing their writing through an LLM, but are just quickly vomiting out a comment in their imperfect English. Exposés of Russian troll factories show that many of these employees are young, university-educated people who only want the money and don’t have strong feelings about the propaganda they are posting, so they half-arse it.

    • They're not necessarily bots in the sense of automated accounts, but rather the old-style troll farms with a bunch of people just clicking away.

It's a full PvP server now. Old social media outrage algos + paying people for posts broke it further.

  • When I left about a year ago, the whole feed was entirely just bot slop from verified accounts. It was impossible to tune or subscribe your way into a good feed. I imagine it's so much worse now with all the AI-generated content.

I prefer X now. An unlimited stream of unhinged, unfiltered thoughts from strangers, straight into my feed.

That you only left last week is shocking to me. People have been saying this about Twitter for like 10+ years, ever since it was commercialized and was no longer just user content.

I am honestly curious what Elon would need to do for you to dislike him. That ship sailed for me long ago.

I mean, his personal lack of ethics, bigotries, greed, and ignorance are what directly made Twitter what it is today. Maybe you should dislike him and hold him in low regard.

What is garbage engagement?

I think it's entirely reasonable that an algorithm shows you things you engaged with. It would be weird if it didn't promote stuff I engaged with.

  • Garbage engagement is posts so obviously wrong/provoking/you name it that you must exercise supreme self-control not to engage with the content. For some people that's quite difficult, so the algorithm thinks, hey, this is trending, maybe I should show it to more people. So this garbage turns up in your stream. I've been dealing with this by straight-up blocking such accounts, but it's a losing battle in the sea of bots :)

    • Person A: Says something exceptionally inflammatory and provably false

      Person B-Z: That's a horrible thing to say, why are you like this?

      Algorithm: Wow, this post must be awesome, I should show it to more people!


  • A better term might be antagonism. X seemed to switch to a system of rewarding views as a method of engagement far above all else, which led to people (generally and deliberately) ramping up the extremeness of their hot takes in a bid to get as much attention as possible.

    A parallel term is "hate click", where there's a headline that's so stupid or off that you click it just to see what the hell they were talking about.

    An example of this vile genre was someone tweeting:

    "Teachers make plenty of money, and I think they should provide school supplies to their students out of their own pocket instead of making hard-working parents pay for them."

    It was a message _designed_ to get people to yell at them, and for all that, it wasn't even about any of the really hot-button stuff around politics, race, or the other divisive things that drive antagonistic engagement.

    Twitter could have rewarded (and previously did reward) all sorts of other types of engagement, but the shift to rewarding divisiveness was just at another level.
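    As a rough illustration of that feedback loop, here is a toy Python sketch (not X's actual ranking code; the weights, post texts, and numbers are all invented) of how a score that treats every reply and view as positive signal ends up ranking rage bait above a genuinely useful post:

        from dataclasses import dataclass, field

        @dataclass
        class Post:
            text: str
            views: int = 0
            likes: int = 0
            replies: list = field(default_factory=list)

        def engagement_score(post):
            # Invented weights: every interaction counts as positive signal,
            # so angry replies boost a post exactly as much as approving ones.
            return 0.1 * post.views + 1.0 * post.likes + 2.0 * len(post.replies)

        rage_bait = Post("Teachers should buy school supplies out of pocket.",
                         views=50_000, likes=40,
                         replies=["That's a horrible thing to say"] * 900)
        useful = Post("A thread on grading rubrics that actually work.",
                      views=8_000, likes=600,
                      replies=["Thanks, this helped"] * 30)

        for p in sorted([rage_bait, useful], key=engagement_score, reverse=True):
            print(f"{engagement_score(p):>8.1f}  {p.text}")
        # Prints the inflammatory post first, even though nearly all its replies are negative.

    Under that kind of scoring, negative attention only stops helping a post if the ranking weights reply sentiment or explicit "show less of this" signals, which is roughly the sort of tuning the thread describes as having been abandoned.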