Comment by swed420

1 day ago

> This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.

Making matters worse, one of the sub groups thinks they're above being manipulated, even though they're still being manipulated.

It started with people confidently asserting that overuse of em dashes indicates the presence of AI, so now they think they're smart for abandoning em dashes. That is altered behavior in service of AI.

A more recent trend with more destructive potential: avoiding the construction "It's not X. It's Y." because AI has latched onto that pattern.

https://news.ycombinator.com/item?id=45529020

This will pressure real humans to stop using a format that's normally used to push back against a previous form of coercion. A long-standing tactic of capital interests has been to get people arguing about the wrong question concerning ImportantIssueX in order to distract from the underlying issue. The way to call this out used to be to point out that "it's not X1 we should be arguing about, but X2." Abandoning that phrasing makes it harder to call out the BS.

That sure is convenient for capital interests (whether it was intentional or not), and the sky is the limit for engineering more of this kind of societal control by just tweaking an algo somewhere.

I find “it’s not X, it’s Y” to be a pretty annoying rhetorical phrase. I might even agree with the person that Y is fundamentally more important, but we’re talking about X already. Let’s say what we have to say about X before moving on to Y.

Constantly changing the topic to something more important produces conversations that get broader, carry a higher partisan lean, and drift further from resolution. I'd consider it a kind of (often well-intentioned) thought-terminating cliché, in the sense that it stops the exploration of X.

  • The "it's not X, it's Y" construction seems pretty neutral to me. Almost no one minds when the phrase "it's not a bug, it's a feature" is used idiomatically, for example.

    The main thing that's annoying about typical AI writing style is its repetitiveness and fixation on certain tropes. It's like if you went to a comedy club and noticed a handful of jokes that each comedian used multiple times per set. You might get tired of those jokes quickly, but the jokes themselves could still be fine.

    Related: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...

  • > Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean

I'm basing the prior comment on the commonly observed tendency of partisan politics to get people bickering about the wrong question (often symptoms) in order to distract from the actual causes of the real problems people face. This is always in service of the capital interests that control/own both political parties.

    Example: get people to fight about vax vs no vax in the COVID era instead of considering if we should all be wearing proper respirators regardless of vax status (since vaccines aren't sterilizing). Or arguing if we should boycott AI because it uses too much power, instead of asking why power generation is scarce.