Comment by vannevar

8 days ago

>Verification does not stop harassment or bullying.

>It will not stop misinformation either.

I'm open to any evidence that either statement is true. The rational argument that verification will reduce harassment, bullying, and misinformation is that the verified perpetrator can be permanently banished from the community for anti-social behavior, whereas an anonymous perpetrator can simply create a new account.

Do you have a rational counter-argument?

>If Reddit itself verifies IDs, then nations across the world will start asking for those IDs and Reddit will have to furnish them.

Every community will have to decide whether the benefits of anonymity outweigh the risks. On the whole, I think anonymity has been a net negative for online communities, but I understand that others may disagree. They'll still be free to join anonymous communities. But I suspect that large-scale, verified communities will ultimately be the norm, because for everyday use people will prefer them. Obviously, they work better in countries with healthy, functional liberal democracies.

>Verification does not stop harassment or bullying.

I can say this from moderating experience, as well as from the research. I'll take the easy case of real-world bullying first: people know their bullies there. It does not stop bullying. Attackers tend to target groups/individuals that cannot fight back.

Now you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things. This 2013 paper studied incivility in anonymous and non-anonymous forums [1]. Incivility was lower in the case where identities were exposed; however, this did not stop incivility.

The Australian eSafety Commissioner has this to say as well:

> However, it is important to note that preventing or limiting anonymity and identity shielding online would not put a stop to all online abuse, and that online abuse and hate speech are not always committed by anonymous or fake account holders. [2]

Now to bring GenAI into the mix: the cost of spoofing a selfie has dropped considerably, to the point of being very cheap. Reliable ID verification will therefore require manually inspecting each individual, which makes verification VERY labor-intensive. India has a biometric ID program, and we are talking about efforts on that scale. Even then, it doesn't stop false IDs from being created.

Combining these various points, ditching anonymity would necessitate a large effort in verifying all users, killing off the ability for people to connect on anonymous forums (LGBTQ communities for example) for some reduction in harassment.

This also assumes that people rigorously check your ID when it's being used, because if there is any gap or loophole, it will be used to create fake IDs to spam, harass, or target people.

[1] https://www.researchgate.net/publication/263729295_Virtuous_...

[2] https://www.esafety.gov.au/industry/tech-trends-and-challeng...

> On the whole, I think anonymity has been a net negative for online community, but I understand that others may disagree.

I would like to agree with you, but having moderated content myself - people do not give a shit and will say whatever they want, because they damned well want you to know it.

Take misinformation: I used to think the volume of misinformation was the issue. It turns out that misinformation amplification is driven more by partisan or momentary political needs than by our improved ability to churn out quantities of it.

  • Verification of identity has to be in person, and it can be made reliable and secure in general. Many countries already have a process and infrastructure for that; they mainly need to open a verification API to third parties: BundID in Germany, GosUslugi in Russia, Diia in Ukraine (built with support from USAID!), etc.

    That said, anonymity is not a necessary condition of a safe environment. Pseudonymity with sufficient protections against disclosure will work just fine. If a platform only knows that there’s a real person behind a nickname, and it can reliably hold that person accountable, that is enough. The platform doesn’t need a name, just some identifier from the identity provider.

    As for misinformation, it is not a moderation issue and should not be solved by platforms. You cannot and should not suppress political messages; they will find their way. It is a matter of education, political systems, and counter-propaganda. The less effective those are, the more effective propaganda is in general.
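A minimal sketch of what such an identity-provider token could look like, in the spirit of OpenID Connect's pairwise subject identifiers: the provider derives a platform-specific pseudonym from the verified identity, so a ban sticks to the person while platforms learn nothing about the real name and cannot link accounts across sites. All names and keys here are hypothetical illustrations, not any real national API:

```python
import hmac
import hashlib

def pairwise_pseudonym(idp_secret: bytes, citizen_id: str, platform_id: str) -> str:
    """Derive a platform-specific pseudonym from a verified identity.

    The identity provider keeps `idp_secret` and `citizen_id` private;
    each platform only ever sees the derived token, which is stable per
    (citizen, platform) pair but unlinkable across platforms.
    """
    message = f"{citizen_id}|{platform_id}".encode()
    return hmac.new(idp_secret, message, hashlib.sha256).hexdigest()

secret = b"idp-signing-key"  # held only by the identity provider

# The same person always gets the same pseudonym on one platform,
# so a banned user cannot re-register with a fresh account...
a1 = pairwise_pseudonym(secret, "citizen-42", "forum.example")
a2 = pairwise_pseudonym(secret, "citizen-42", "forum.example")
assert a1 == a2

# ...but the pseudonym on another platform is unlinkable to the first.
b = pairwise_pseudonym(secret, "citizen-42", "other.example")
assert a1 != b
```

The accountability comes from the determinism of the derivation, not from the platform knowing who you are.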

  • >I'll take the easy case of real world bullying first - people know their bullies here. It does not stop bullying. Attackers tend to target groups/individuals that cannot fight back.

    But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.

    >Now you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things.

    Of course there will continue to be harassment on the margins, where people could reasonably disagree about whether it's harassment. But even in those cases, the victims can easily and permanently block any interaction with the harasser. Which removes the gratification that such bad actors seek.

    >Incivility was lower in the case where identities were exposed, however this did not stop incivility.

    I think we're getting hung up on what 'stop' means in this context. If I have 100 incidents of incivility per day before verification, and only 20/day after, then I've stopped 80 cases/day. Have I stopped all incivility? No, but that was not the intent of my statement. I think it will drastically reduce bullying and misinformation, but there will always be people who come into the new forum and push the envelope. But they won't be able to accumulate, as they are rapidly blocked and eventually banned. The vast majority of misinformation and bullying comes from a small number of repeat offenders. Verification prevents the repetition.

    Have you moderated in a verified context, where a banned actor cannot simply create a new account? I feel like there are very few such platforms currently, because as you point out, it's expensive and so for-profit social media prefers anonymity. But if we're all spending a significant part of our lives online, and using these platforms as a source of critical information, it's worth it.

    One context where everyone is verified is a typical business: your fellow employees, once fired, cannot simply create another company email account and start over. Bad apples who behave anti-socially are weeded out, and people generally behave civilly to each other. So clearly such a system can and does work; most of us see it on a daily basis.

    • >But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.

      Firstly, please acknowledge that knowing the identity of the attacker doesn’t stop bullying. Ignoring or papering over that fact deprives the argument of the support it needs to be useful in the real world.

      There is a reason I pointed out that it doesn’t stop harassment: it disproves the contention that anonymity is the causal force behind harassment.

      The argument that removing anonymity reduces harassment is supported, but it results in other issues. On a fully de-anonymized national social media platform, people will target minorities, immigrants, and other nations, i.e., whatever jingoism and majority viewpoint is acceptable. Banning such conversation will put the mods in the crosshairs.

      And yes, if it reduced harassment by 80%, that would be something. However, the gains are lower (from that paper, it seemed like a 12% difference).

      ——-

      I am taking great pains to separate out misinfo from bullying / harassment.

      For misinformation, the first section of this talk, from about minute 3 to minute 4, where Rob Faris speaks, does a better job of articulating the modern mechanics: https://www.youtube.com/watch?v=VGTmuHeFdAo

      The larger issue with misinformation is that it has utility for certain political groups and users today. It gives them the ability to create narratives and political speech faster and more effectively.

      Making a more nuanced point would require elaborating on my personal views on market capture for ideas. The shortest point I can make about misinformation, journalism, and moderation is this:

      Factual accuracy is expensive, and an uncompetitive product, when you are competing in a market that is about engagement. Right now, I don’t see journalism, science, and policy (slow, fact- and process-limited systems) competing with misinformation.

      Solving the misinformation problem will require figuring out how to create a fair fight / fair competition, between facts and misinformation.

      Since misinformation purveyors can argue they have freedom of speech, and since they are growing increasingly enmeshed with political power structures, simple moves like banning shift risk onto moderators and platforms, all of whom want to keep living their lives without being harassed.

      For what it’s worth, I would have argued the same thing as you until a few scant months ago. The most interesting article I read showed that the amount of misinformation consumed is a stable % of total content consumed, indicating that while the supply and production capacity of misinformation may increase, the demand is limited. This, coupled with the variety of ways misinformation can be presented and the ineffectiveness of fact checkers at stopping uptake, forced a rethinking of how to effectively address all that is going on.

      ——-

      I don’t have data on how people behave in a verified context. I have some recollection of seeing this somewhere, and of eventually being convinced it was not a solution. I’ll have to see if I can find it.

      I think one of the issues is that verification is onerous, and it creates a situation where you can lose your ID and then face all the real-world challenges that come with it, while also losing the benefits of being online. There’s a chilling effect on speech in both directions. Anonymity was pretty critical to my being able to learn enough even to make the arguments I am making, or to converse with people here.

      If there’s a TL;DR to my position, it’s that the ills we are talking about are symptoms of dysfunction in how our ecosystem behaves, so these solutions will only shift the method by which they are expressed. I would agree that it’s a question of tradeoffs. To which my question is: what are we getting for the ground we are conceding?
