Comment by vannevar

8 days ago

>I'll take the easy case of real world bullying first - people know their bullies here. It does not stop bullying. Attackers tend to target groups/individuals that cannot fight back.

But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.

>Now you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things.

Of course there will continue to be harassment on the margins, where people could reasonably disagree about whether it's harassment. But even in those cases, the victims can easily and permanently block any interaction with the harasser, which removes the gratification that such bad actors seek.

>Incivility was lower in the case where identities were exposed, however this did not stop incivility.

I think we're getting hung up on what 'stop' means in this context. If I have 100 incidents of incivility per day before verification, and only 20 per day after, then I've stopped 80 cases a day. Have I stopped all incivility? No, but that was not the intent of my statement. I think verification will drastically reduce bullying and misinformation, though there will always be people who come into the new forum and push the envelope. But they won't be able to accumulate, as they are rapidly blocked and eventually banned. The vast majority of misinformation and bullying comes from a small number of repeat offenders; verification prevents the repetition.

Have you moderated in a verified context, where a banned actor cannot simply create a new account? I feel like there are very few such platforms currently because, as you point out, verification is expensive, so for-profit social media prefers anonymity. But if we're all spending a significant part of our lives online, and using these platforms as a source of critical information, it's worth it.

One context where everyone is verified is a typical business: your fellow employees, once fired, cannot simply create another company email account and start over. Bad apples who behave anti-socially are weeded out, and people generally behave civilly to each other. So clearly such a system can and does work; most of us see it on a daily basis.

>But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.

Firstly, please acknowledge that knowing the identity of the attacker doesn't stop bullying. Ignoring or papering over that fact deprives an argument of the support it needs to be useful in the real world.

I pointed out that it doesn't stop harassment for a reason: it disproves the contention that anonymity is the causal force behind harassment.

The argument that removing anonymity reduces harassment is supported, but it brings other issues. On a fully de-anonymized national social media platform, people will target minorities, immigrants and other nations, i.e., whatever jingoism and majority viewpoint is locally acceptable. Banning such conversation will put the mods in the crosshairs.

And yes, if it reduced harassment by 80%, that would be something. However, the gains are lower; from that paper, it seemed like a 12% difference.

---

I am taking great pains to separate out misinfo from bullying / harassment.

For misinformation, the first section (roughly minute 3 to minute 4, where Rob Faris speaks) does a better job of articulating the modern mechanics: https://www.youtube.com/watch?v=VGTmuHeFdAo

The larger issue with misinformation is that it has utility for certain political groups and users today: it lets them create narratives and political speech faster and more effectively.

Making a more nuanced point would require elaborating on my personal views about market capture for ideas. The shortest point I can make about misinformation, journalism and moderation is this:

Factual accuracy is expensive, and an uncompetitive product, in a market that optimizes for engagement. Right now, I don't see journalism, science, or policy (slow, fact- and process-limited systems) competing successfully with misinformation.

Solving the misinformation problem will require figuring out how to create a fair fight / fair competition, between facts and misinformation.

Since misinformation purveyors can argue they have freedom of speech, and since they are growing increasingly enmeshed with political power structures, simple moves like banning shift risk onto moderators and platforms, all of whom would prefer to keep living their lives without being harassed.

For what it’s worth, I would have argued the same thing as you until a few scant months ago. The most interesting article I read showed that the amount of misinformation consumed is a stable percentage of total content consumed, indicating that while the supply and production capacity of misinformation may increase, the demand for it is limited. This, coupled with the variety of ways misinformation can be presented and the ineffectiveness of fact checkers at stopping uptake, forced a rethinking of how to effectively address all that is going on.

---

I don’t have information on how people behave in a verified context. I have some inkling of having seen this at some point, and of eventually being convinced it was not a solution. I’ll have to see if I end up finding something.

I think one of the issues is that verification is onerous: if you lose your ID, you face all the real-world challenges that come with it while also losing the benefits of being online. There’s a chilling effect on speech in both directions. Anonymity was pretty critical to my being able to even learn enough to make the arguments I am making, or to converse with people here.

If there’s a TL;DR to my position, it’s that the ills we are talking about are symptoms of dysfunction in how our ecosystem behaves, so these solutions will only shift the method by which they are expressed. I would agree that it’s a question of tradeoffs. To which my question is: what are we getting for the ground we are conceding?

  • >There is a reason I pointed out that it doesn’t stop harassment, because it disproves the contention that anonymity is the causal force for harassment.

    I agree, of course, that anonymity doesn't cause harassment. The vast majority of anonymous users do not harass other users. But anonymity does facilitate harassment and bullying, by making it difficult to punish the behavior.

    >I am taking great pains to separate out misinfo from bullying / harassment.

    I agree that they are conceptually separate things, though there can be overlap in practice. I suspect that people who harass others also tend to spread misinformation, since often online harassment is based on bigotry, which goes hand-in-hand with ignorance and misinformation.

    Verification doesn't make the problem of identifying misinformation any easier; it only permits removal of the sources once it is identified. But the online environment would still be greatly improved if it became easier to remove obviously inaccurate material. And in less clear-cut situations, debunking would be better incentivized if it resulted in removal rather than an endless game of anonymous whack-a-mole.

    I think the future of online communities is going to trend towards smaller, verified groups. There will still be open platforms where everyone can be anonymous and interact with everyone else---like a giant masquerade party---but digital "home" will be among a known community.