Comment by octopoc
1 year ago
Many of us are in the tech industry. Has anyone seen this sort of thing happen? Or is it something we wouldn't be aware of, since it's usually on the "content moderation" side of the house? (I've never worked in social media, so I'm not really familiar with content moderation beyond removing the occasional dick pic.)
Edit:
> Most of them are product managers, software developers. … They work with the policy teams with an internal set of tools to forward links and explanations about why they need to be removed.
If you look at the extensive reporting done by racket.news around the Twitter Files and Facebook Files, you can learn about the back channels many government agencies used to directly report thousands of people to be banned or shadow-banned. A federal court judge concluded that it was the biggest violation of free speech in modern history and ordered the government to stop contacting the social media companies unless something was found to be illegal. That order applied to government agencies, but it does not apply to groups that might be organized or funded by a foreign government.
I remember browsing through the Twitter Files and finding nothing interesting in them.
Yes, all social media companies have open channels with law enforcement. That's because they have legal obligations, and when someone contacts a moderator claiming to be a law enforcement officer working on a kidnapping or trying to prevent a terrorist attack, needing time-sensitive help to save lives, you don't want the moderator to have to guess whether it's a real emergency or a hoax.
It's... not a secret. If you live in a democracy, you can quickly find out the names of these channels; they have websites.
Source: I've been part of a moderation team. Not on something that large, though.
>they have websites
That's interesting, can you link to one?
You may not have read enough. It went way beyond law enforcement, which would have been fine and legal. People were censored for talking about the dangers of vaccines, the war with Russia, and whatever the administration, the FBI, or other government agencies deemed malinformation. The impact of this censorship is still being felt today. People were misinformed about the vaccine and the war, contributing to the deaths of millions. If people had understood that Ukraine had no chance of defeating Russia, or that the largely untested vaccine was not safe, many of them might still be alive today.
> If you look at the extensive reporting done by racket.news around the Twitter Files and Facebook Files, you can learn about the back channels many government agencies used to directly report thousands of people to be banned or shadow-banned. A federal court judge concluded that it was the biggest violation of free speech in modern history.
There was no such thing in the Twitter files.
I've been part of a moderation team in a (much) smaller context. Most people want to do good work, but in the end, we're all human, so of course anybody could be influenced, especially in such volatile situations.
How far people are actually influenced and in which direction... that's anybody's guess.
What if there is pressure from bosses, from outsiders, or social pressure? What if they say, "Are you supporting terrorists?"
Honest answer?
You work at one of these companies for enough years and someone will accuse you of supporting terrorists eventually.
What you learn working for a multinational corporation is that, as an international community, people don't agree on much, including definitions of "terrorism," fairness, geopolitical borders, or the law.
It's a weird feeling. If you ever wonder how companies can stray so far from "obvious" morality... That's how. Things get a lot less obvious when you're in a position where everyone has an opinion and the opinions often conflict.
So to answer your question more directly... It doesn't take long for outsiders accusing you of supporting terrorism to be met (if only in your own internal filters) with "Oh you have a problem with my approach? Get in line."
(On the flip side, a lot of the training for people acting in that capacity at a big corp is about how not to get phished. When you're on the front line of moderation, customer interaction, etc., bad actors will attempt to use you to compromise third parties. There's a reason there are formal processes for dealing with law enforcement, for example.)
It’s cognitive dissonance to believe that politicians can be bought and social media companies or their content moderation teams/employees cannot.
Only one data point, but FWIW: when I worked for Google, I found some actively toxic YouTube content with upwards of 500k views that was telling children to off themselves. Despite using my employee back-channel connections, the most I was able to get was an eventual "I'm not allowed to do anything about this" from a YouTube moderator, though it seemed to be for technical reasons (all the nasty content was in annotations, which apparently weren't wired into the moderation pipeline). There definitely wasn't a red button for me to hit as an employee to get it taken down.
That seems 4chan levels of vile.
I ended up digging around on the channel and tracked it back to some people of that type; they had other uploads that were basically gloating that the video was immune to moderation. It was a rip of the Undertale soundtrack, so laser-targeted at kids. (If you're unfamiliar with Undertale, it's recognizable enough that one of its characters got added to one of Nintendo's games.)
Sadly, if the Undertale soundtrack were aggressively Content ID'd/DMCA'd, that would have been a way to take it down. But that would penalize everyone who uploads footage of the game, so obviously that isn't done.
Yet if the video had sampled Metallica for too long, it would have been removed and the feds would be at your door within minutes. Such is an algorithm tuned for ad revenue and lawsuits rather than protection. The above story just confirms what the scammers in this video say about YouTube and wholesale content scamming with AI editing software:
https://youtu.be/ZMfk-zP4xr0?si=R3RxVJJ7WxhKDj_L
I used to help run a Facebook page that shared a variety of content. We'd post political things sometimes and get a few angry messages; that was normal.
One incident stands out because we received far more messages than I'd ever seen: it was the time we posted a news story about Netanyahu blaming a Palestinian for the Holocaust. We got several messages about what horrible lying racists we were. That much was common to all of them, but they split in one main way: about half claimed Netanyahu never said what he said; the other half claimed he did say it, but that he was right.
> Has anyone seen this sort of thing happen?
Yes, of course. Content moderation is the expected standard when dealing with crowdsourced content, including any data coming from social media, and it is subject to essentially private, subjective judgments.
Even if you look at it from the outside, you can see it happen. This isn't a new phenomenon; it really picked up steam around 2014 or 2015, and when Trump was elected it hit another level.
The Twitter Files clearly showed that there are many connections reaching deep into social media companies, and that is very likely true for every larger platform.
It would be surprising if there weren't back channels, because, sadly, they have become relevant.