Comment by derefr
4 days ago
> This has led me to a theory that humans just can't behave nicely beyond some threshold group size.
I think you're generalizing far too broadly. The problem you're describing is more-or-less exclusively a problem with online, open-membership groups.
Consider: if the groups you describe were in-person groups, these ranters would constantly be getting disengaged/off-put/disgusted reactions from the "silent majority" of the people in the group. And just these reactions — together with a lack of any positive engagement — would, almost always, be enough to make them stop or go somewhere else.
(Or, to put a finer point on that: "annoyed, judgemental silence, and then turning away / back to the person you were talking to" would always put off the vast majority of people, with just a few — people who have trouble understanding non-verbal signals — persisting because they aren't "getting the message." And in an in-person context, these few would still eventually be taken aside and given a talking-to, because if they're butting into other in-person conversations with this behavior, they're being far more disruptive than "random new conversation threads" tend to be felt as. Even though "random new conversation threads" can kill a group just as dead.)
The problem with decorum / respect-for-purpose in unmoderated online open-membership groups seems to mostly stem from the fact that people underestimate the importance of non-verbal signals in moderating/regulating behavior. And so there is a dearth of such signals available in such groups. Our brains didn't evolve to play the game of socializing without these signals, any more than ants evolved to coordinate without pheromones. So many people's brains begin to play the game in degenerate / anti-social ways.
From what I've been able to gather, from personal interactions with many people who admit to being "Internet trolls" at some point in their lives... their behavior was almost never intentional maliciousness/active-disregard-for-others on their part. It's rather an emergent behavior — something they "just ended up doing" — given a lack of (non-verbal-signal-alike) calibrating feedback.
And why is there so little non-verbal-signal-alike communication online?
Well, for one thing, we often aren't even aware we're giving off such signals; and so, if we need to consciously choose to communicate them (as we do in online contexts), then we simply fail to do so, because the majority of these signals never even rise to our conscious attention as something to be communicated.
And even when we do become aware of them, we often don't feel them to be important enough to be "worth" going to the effort of translating into some more conscious/explicit/non-subtextual form of communication.
And then, even when a strong desire to communicate a nonverbal signal does bubble up within us... most online chat/forum systems are horrible at transmitting such signals with any degree of fidelity, when they transmit them at all. Especially the kinds of signals used for intra-group behavior regulation.
Facebook, for example, has reaction emojis on both posts and comments — but no reaction emoji that transmits a sentiment like "I disapprove of you saying this; please stop" (e.g. U+1F611 EXPRESSIONLESS FACE or U+1FAE4 FACE WITH DIAGONAL MOUTH). Rather, the only reaction emojis available are those meant to react sympathetically to the emotive content of the post/comment — e.g. with anger, sadness, etc. (People do try to use the "anger" reaction to express disapproval of posts; but when the content itself is often "ragebait" / meant to evoke anger, the poster won't necessarily understand that these reactions are being directed at them, rather than at their post.)
Further, no chat system or forum I'm aware of has participant-visible signals of "detach rate" — i.e. there's no way for people to know when others are clicking on their posts, reading one line, doing a 180 and running away as fast as they can. (YouTube videos expose this metric to their creators; I think it's actually very helpful for them. It could do with being implemented far more widely.)
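To make the metric concrete, a minimal sketch of how a platform might compute such a per-post "detach rate" from view events. Everything here is hypothetical (the event schema, field names, and 5-second bounce threshold are assumptions, not any real platform's API):

```python
from dataclasses import dataclass

@dataclass
class ViewEvent:
    """One reader's visit to a post. Fields are hypothetical."""
    post_id: str
    seconds_on_page: float

def detach_rate(events: list[ViewEvent], post_id: str,
                threshold: float = 5.0) -> float:
    """Fraction of views of `post_id` that 'bounced' within `threshold` seconds."""
    views = [e for e in events if e.post_id == post_id]
    if not views:
        return 0.0
    bounces = sum(1 for e in views if e.seconds_on_page < threshold)
    return bounces / len(views)

events = [
    ViewEvent("p1", 2.0),    # opened, left almost immediately
    ViewEvent("p1", 120.0),  # read through
    ViewEvent("p1", 1.5),    # bounced
    ViewEvent("p1", 45.0),   # read through
]
print(detach_rate(events, "p1"))  # → 0.5
```

The interesting design question is the one raised above: not how to compute this (it's trivial), but whether to surface it to other participants rather than just to the poster.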
(And, to be a conspiracy theorist for a moment: I think, in both cases, this is probably intentional. The explicit purpose of signals that "regulate behavior", after all, is to make people engage less in certain anti-social behaviors. Making any such tools available will therefore inevitably make any kind of platform-aggregate "engagement metrics" go down! If they were ever temporarily introduced, they'd have been quickly removed again with this justification.)
Great analysis. I don't think it's conspiracy-theorist thinking to believe it to be intentional, or at least a result of KPIs.
One thing I think you are missing is that in-person groups are usually far smaller. Anything with 1,000 people would be organised, and there would be rules of behaviour, moderation of discussion, etc. Most often, if something is that big, it's mostly an audience.
I think the other difference is that in these online groups there is no real community or relationships. If you annoy people in real life it has consequences. In an FB group there are none.
> One thing I think you are missing is that in person groups are usually far smaller.
Yes, but — an online group with 1000 members isn't really equivalent to an in-person group with 1000 members. It's actually more equivalent in "activity" / "number of expected novel pairwise interactions" to an in-person group with, say, 150 members.
(Why? Because the "members" of an online group, as reported by most chat/forum systems, are just the number of people with access to the chatroom/forum, or who are subscribed to updates to the chatroom/forum, etc. Most of these people have never posted. Many more have only ever posted once. Whereas, in common parlance, you wouldn't really describe someone as a "member" of an in-person group, unless they actually regularly attend the group's in-person meetings. [And that goes double for formal in-person organizations, which often have membership fees or dues. Nobody bothers paying to maintain membership to these if they aren't intent on attending!] So the word "members" here really refers to two very different metrics: for online, the number of passive readers; for in-person, some upper bound on the number of people you might expect to encounter at the average in-person event. We need to do some unit conversions here in order to make valid comparisons!)
Let's say, for the sake of argument, that the average online group with 1000 "members" might have ~100 regular posters. (It's probably less, actually.) And let's also say that the average (geographically-based) in-person group with 150 "members", has events attended by ~100 people. And let's assume "regular posters" and "regular event attendees" are roughly equivalent in how they cause interactions that drive (dis)affection / (dis)engagement within the group.
I believe we both already agree that an in-person group where events regularly see ~100 attendees, tends to do just fine without rules of behavior / explicit moderation / etc.
And yet, it seems to me that an online group with "just" ~100 regular posters, almost always tends toward falling apart, unless it does have such rules, and moderation to enforce those rules.
That's the more specific, apples-to-apples-ish distinction that I had in my head in my GP post: it's weird that taking basically the same "level of expected interactions" from in-person + synchronous to online + asynchronous tends toward a different equilibrium state.
---
I do also agree with the lack of community / real relationships being a major driving factor. If you take a bunch of people who are already in the same community, and give them a closed-membership unmoderated online forum to speak in, the resulting interactions don't seem to tend toward awfulness/collapse nearly as badly.
But I would argue that this isn't just due to "consequences" (i.e. posters knowing they're impacting their position in the equivalent real-world community.)
Rather, I think a large part of what makes online forums "backed by" shared pre-existing communities more robust, is that the community provides its members with an implicit shared context for "recovering" an assumed set of nonverbal signals that "would go along with" others' textual wording choices... which in turn regulates behavior exactly as if those nonverbal signals were being explicitly communicated. People don't need to actually convey that they're frowning at you, if everyone in the community (including the poster!) knows exactly what subtextual meaning is carried by a reply of e.g. "Well bless your heart."
This is a testable proposition: it implies that closed-membership forums "bound to" a community offer no benefit, if 1. the community itself is open-membership and 2. new people join the community itself frequently enough that few community resources are being invested per new member on giving them a thorough enculturation into the community (incl. awareness of the community's wording-subtext equivalences.)
- So you would expect that, if there's an online community forum for e.g. a small village, where the only way to move there is to marry into an existing household there — then that forum will be robust and self-moderating, because every newcomer to that community gets a thorough dose of community enculturation.
- Whereas, if there's an online community forum for e.g. the congregation of a church in a particular urban neighbourhood of a city, where anyone can just rent an apartment in the neighbourhood and start attending the church... then that forum might be quite awful, despite every member being aware that what they say there will impact how the congregation sees them. Because there's no enculturative "speed limit" preventing absolute newcomers from immediately posting in that forum.