
Comment by aaroninsf

1 month ago

I wouldn't say popular

It has a strong smell of "stop trying to make fetch happen, Gretchen."

It's definitely popular online, specifically on Reddit, Bluesky, Twitter, and TikTok. There are communities that have formed around their anti-AI stance[1][2][3], and after multiple organic efforts to "brainstorm slurs" for people who use AI[4], "clanker" has come out on top. This goes back at least 2 years[6] in terms of grassroots talk, and many more years to the original Clone Wars usage[7].

For those who can see the obvious: don't worry, there's plenty of pushback regarding the indirect harm of gleeful fantasy bigotry[8][9]. When you get to the less popular--but still popular!--alternatives like "wireback" and "cogsucker", it's pretty clear why a youth crushed by Woke mandates like "don't be racist plz" is so excited about unproblematic hate.

This is edging on too political for HN, but I will say that this whole thing reminds me a tad of things like "kill all men" (shoutout to "we need to kill AI artist"[10]) and "police are pigs". Regardless of the injustices they were rooted in, they seem to have gotten popular in large part because it's viscerally satisfying to express yourself so passionately.

[1] https://www.reddit.com/r/antiai/

[2] https://www.reddit.com/r/LudditeRenaissance/

[3] https://www.reddit.com/r/aislop/

[4] All the original posts seem to have now been deleted :(

[6] https://www.reddit.com/r/AskReddit/comments/13x43b6/if_we_ha...

[7] https://web.archive.org/web/20250907033409/https://www.nytim...

[8] https://www.rollingstone.com/culture/culture-features/clanke...

[9] https://www.dazeddigital.com/life-culture/article/68364/1/cl...

[10] https://knowyourmeme.com/memes/we-need-to-kill-ai-artist

  • Citations eight and nine amuse me.

    I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.

    That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

    ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.

    Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it into your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.

    • > I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

      > ChatGPT deserves no more or less empathy than a fork does.

      I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.

      But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.

      It's not exactly scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might make you act more callous overall.

      So, I'll burn an extra token or two saying "please and thanks".


    • No time for a long reply, but what I want to say centers on video games. Exterminate the aliens! is fine, in a game. But if you sincerely believe it's not a game, then you're being cruel (or righteous, if you think the aliens are evil), even though it isn't real.

      (This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)

      What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.

      If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.

      So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.

      I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.


    • Sorry, I was unclear — that racism comment was tongue in cheek. Regardless of political leanings, I figured we can all agree that racism is bad!

      I generally agree re:chatGPT in that it doesn’t have moral standing on its own, but still… it does speak. Being mean to a fork is a lot different from being mean to a chatbot, IMHO. The list of things that speak just went from 1 to 2 (humans and LLMs), so it’s natural to expect some new considerations. Specifically, the risk here is that you are what you do.

      Perhaps a good metaphor would be cyberbullying. Obviously there’s still a human on the other side of that, but I do recall a real “just log off, it’s not a real problem, kids these days are so silly” sentiment pre, say, 2015.

  • >after multiple organic efforts to "brainstorm slurs" for people who use AI

    no wonder it sounds so lame, it was "brainstormed" (=RLHFed) by a committee of redditors

    this is like the /r/vexillology of slurs