Comment by diacritical
5 days ago
Regulating content that makes people enraged seems like a slippery slope toward regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence, or really obvious bullying (serious cases like telling a kid to "kill yourself"), but regulating algorithms that show rage bait leaves a lot of judgment to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or future governments with this much power.
For example, some teen got radicalized by racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other speech that's unpopular (now or in the future) won't be censored or regulated too? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
At least in the US the government can't regulate speech (for the most part). But what we could do is regulate recommendation algorithms or other aspects of the overall design in a way that's general enough to be neutral with regard to any particular speech. And such regulations wouldn't need to apply to any entity below some MAU threshold or other metric.
Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.
In theory I'm OK (kinda) with regulating the "overall design" somehow, but I don't see how it's going to work. Forced interoperability is a (very?) good idea, since it's really general, but it also doesn't directly address what the article and most comments talk about - the rage bait. I just can't imagine regulations (or "laws", or whatever the correct term is) that deal specifically with the algos that push rage bait and that can't later be abused, once passed, to deal with other unpopular speech. And it seems like people want laws that deal directly with that - with the bad types of speech or the algos themselves.
To clarify, I use "rage bait" as an example phrase, but it also covers algos that promote engagement at any cost and other things that aren't outright dangerous, just things we believe are harmful. Not, like I said, CSAM or yelling FIRE or telling people to kill themselves.
Interoperability sidesteps the issue by giving users the choice of which algorithm (or algorithm provider) to use. The majority might or might not agree with that approach - for example, tobacco obviously hasn't been left purely to the individual's judgment in the West.
Agreed - you can't regulate speech in a targeted way without it still being speech regulation. You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting algorithms from optimizing for certain metrics? Or even prohibiting the collection of those metrics in the first place.
> I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid)
I’ve reported videos that look like sexual exploitation, videos that call for violence, and videos that promote hate (xyz people are cockroaches), and all I’ve gotten back is “it does not go against community guidelines” with a link to block the person who created them. So any concerns about “where do we draw the line” are in my opinion pointless, because the bare minimum isn’t even being done.
I agree with your CSAM and explicit calls for violence examples - they probably should be regulated. But a few comments ago in another thread, someone didn't like me calling coworkers who annoy me with their mindless chit-chat "corporate drones". My post could be construed as promoting hate. Where do we draw the line between "cockroaches" and "drones"? Do I have to call a certain "protected class" drones for it to qualify as hate speech?
What if I didn't say anything bad about a race or a sex, but said:
> I have coworkers that pester me with their small talk about the weather every time I see them. I hate those fucking cockroaches.
That's in bad taste, sure, but should it be regulated? You may know I obviously don't hate-hate them (they're just annoying, but most of them are good people) or actually consider them cockroach-like in any meaningful way (they're obviously people, just with annoying tendencies). But would a regulator know the difference? What about a malicious regulator who gets paid by (OK, this is a silly example, but bear with me) the weather-talking-coworker lobby to censor me? In many not-so-silly examples, a regulator could silence anyone for anything (politics, sex, drugs, ethics), as long as the speech uses a bad word or says anything negative about anyone. I don't want to live in such a society. That much power would be abused sooner or later.
I'm sorry, but are you saying it's hard to figure out what to do, so let's do nothing? Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content; the slope is only slippery because the salivating mouths of these social platforms grease it.
Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
> are you saying it's hard to figure out what to do so let's do nothing?
I'm fine with doing something, but the current "something" seems slippery.
> Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
But what is "racist", exactly? See why I think it's a slippery slope and why it's ill-defined:
1. We could agree that "Let's go out and kill/enslave all the $race/$gender" is racist, but that statement is bad no matter what group we put in $race, because it's speech that incites violence.
2. What about "$race is genetically inferior in some way (less intelligent, less athletic, more prone to $bad_behavior)"? I honestly think most differences between races/ethnicities are due to environmental factors, but what if there actually are differences in intelligence or anything like that? Should we ban speech that discusses that? Black people win running races and are great at basketball. They're prone to certain diseases, as are Caucasians or Asians. So would you ban discussing that? Or would you only ban blindly asserting that $race is $Y without some sort of proof?
3. What about statements like "There are way more male bus drivers because X"? Or "men are better at Y, but women are better at Z"?
What do you think the definition of racism and sexism in this context should be? I think the line is at inciting violence toward a group, not at discussing differences that may or may not be true.
> Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
I think restricting a platform (or anyone or anything) from promoting someone IS censorship. If something isn't censored, why shouldn't I be able to promote it? This honestly feels disingenuous - like "we pretend the racist isn't censored and can have his little blog, but it's illegal to promote his little blog".
It's easy: let's start with banning #1, obvious incitement of violence. If they can enforce just that much, it would be great.
> I'm sorry but are you saying it's hard to figure out what to do so let's do nothing?
That seems more reasonable than the alternative, which is making modifications to a complex system when you aren't sure what the outcome will be. You're more likely to cause bigger problems.