Comment by n4r9
5 months ago
I'm saying that Trump's re-election is not a compelling counter-example to the general argument for banning disinformation, because he was "re-"platformed for over a year by the time of the election.
You underestimate how much seeing a sitting president deplatformed affected the voting public. It wasn't just Musk; all the talk of "deplatforming" people on the right was a clear erosion of free speech that pushed many moderates like myself rightward.
It wasn't just the Trump ban either, tbh one of the biggest was the banning of the Babylon Bee over a pretty tame joke. There's a long list of other right-leaning accounts that were banned during that time as well.
I mean, who knows how well Trump would have done had he not been re-admitted to Twitter. It's a counterfactual. For what it's worth, I'm not advocating de-platforming right-wing voices. I just think there's an argument to be made that social media platforms have a responsibility to mitigate misinformation and incitements to violence, and that it should be done in a transparent and impartial manner.

There are high-profile right-wing accounts that spread a lot of misinformation trying to whip up a frenzy. In the UK, Musk's un-banning brought accounts like Katie Hopkins, Andrew Tate, and Tommy Robinson back online, one consequence of which was a series of violent riots last summer fuelled by false claims and Islamophobia. I hear people argue that as long as anyone can share their ideas, the truth will bubble to the top. Well, that's not how it's playing out.
Having private companies label things as misinformation or incitements to violence themselves is a slippery slope that has never worked well in practice. As soon as a company employs someone whose job is to decide whether something is misinformation or not, that person will immediately apply their own personal biases.
The better approach is to allow everything that is _legal_ to say. If speech is permitted by a court of law, companies should not be applying their own additional filters on top. It can be downranked in the algorithm, but legal speech should at least be allowed.
Even just looking at your statement, lumping Andrew Tate in with Tommy Robinson is completely subjective; they are two wildly different people. Everything Tommy Robinson has said is true: he regularly states that he doesn't care about race, he rejects white supremacists, and his movement is filled with peaceful, normal Brits. Nothing he says or does is violent or illegal, and his claims about Pakistani rape gangs are supported by evidence and first-hand testimony.

More generally: not wanting to become a hated minority in your own country is not an extremist position. It doesn't mean you hate others for their skin color or deserve whatever "-phobic" label you care to apply. People vote repeatedly for governments that promise to stop the boats, and every government that gets elected decides not to try, for some mysterious reason. People are justifiably angry that their elected officials are doing the opposite of what they voted for.
Yes, Andrew Tate is of course a controversial dumb guy who says things that are pretty out there, but the principle of allowing him to say everything that is legal in a court of law is important. Most normal people recognize that he's outside the Overton window on many topics, and it's generally easy to counter his speech with better speech. But lumping crazies like Tate in with legitimate figures like Robinson is a common tactic to delegitimize the people you disagree with.