Comment by whyenot

4 days ago

As a middle-aged (Gen X) woman, I find my Facebook feed is pretty good. It's filled with posts from friends and interest groups that I'm a part of. The reason I no longer use FB has nothing to do with the feed; it's because Mark Zuckerberg is an awful person, and I refuse to use his product. The cognitive dissonance is strong here, because I still use WhatsApp (it's the best way to stay in contact with my relatives in Europe), and I still use IG, albeit mostly for work, and sparingly.

I'm still a FB user even though most friends and relatives have disengaged due to toxicity. But what I've noticed consistently is that any group on FB with more than 1000 members will end up surfacing so much toxic sentiment that I have to unsubscribe. I'm talking about innocuous topics such as local road conditions. That one became full of rants about out-of-state drivers, drivers who don't understand English, people posting license plates of bad drivers, etc. This has led me to a theory that humans just can't behave nicely beyond some threshold group size.

  • > This has led me to a theory that humans just can't behave nicely beyond some threshold group size.

    I think what happens is that the risk of including a critical mass of "toxics" (for lack of a better word), enough that they can keep a conversation going, increases with FB group size. Without active moderators it doesn't take much.

    • I think it is important to remember that only a tiny, tiny fraction of most Facebook groups is actually posting, commenting, or even viewing the group at any given moment. Most people who view don't post/comment. (True of Reddit and other social media as well.)

      And the thing about poorly moderated groups (especially on platforms with rage-boosting algorithms) that let assholes go off without consequences is: the people who both a) actually look at the group ever and b) aren't assholes either leave entirely, stop looking at the group, and stop posting/commenting to the group (if they ever did in the first place). They go find places to hang out where there aren't a bunch of assholes. Nobody wants to hang out with the assholes when they can easily just not.

      And at the same time, the assholes all gravitate to the same few places because they get kicked out of all the other places. Or if they don't get kicked out outright, they get shouted down or ignored, which they hate. So instead they congregate where they can get away with or get praised for saying whatever vile things they want.

    • The Dunbar number is 150 for humans, but that only measures maintaining a group; maybe the "behave nicely" number is smaller.

  • > But what I've noticed consistently is that any group on FB that has more than 1000 members will end up surfacing so much toxic sentiment that I have to unsubscribe.

    It depends on the group and how well it is moderated.

    I live in an area where everything depends on Facebook. There are multiple FB groups for the town, the largest of which has 80k members. Not perfect, but not toxic. The same is true of other similar groups.

    I am an admin of another with 30k members. It has a tight focus (exams and qualifications for home ed kids in the UK - GCSEs/IGCSEs mostly, but other things too), membership is only for parents of such kids (there are membership questions), the group is private, posts require approval, irrelevant comments get deleted, repeat offenders get kicked out. We do not have a lot of problems (some attempts at spam by tutors, but they get kicked out).

  • I think after a certain group size, people feel immune from consequences, or feel that their alternative viewpoint might have a better chance of landing with someone.

  • > This has led me to a theory that humans just can't behave nicely beyond some threshold group size.

    I think you're generalizing far too broadly. The problem you're describing is more-or-less exclusively a problem with online, open-membership groups.

    Consider: if the groups you describe were in-person groups, these ranters would constantly be getting disengaged/off-put/disgusted reactions from the "silent majority" of the people in the group. And just these reactions — together with a lack of any positive engagement — would, almost always, be enough to make them stop or go somewhere else.

    (Or, to put a finer point on that: "annoyed, judgemental silence, and then turning away / back to the person you were talking to" would always put off the vast majority of people, with just a few — people who have trouble understanding non-verbal signals — persisting because they aren't "getting the message." And in an in-person context, these few would still eventually be taken aside and given a talking-to, because if they're butting into other in-person conversations with this behavior, they're being far more disruptive than "random new conversation threads" tend to be felt as. Even though "random new conversation threads" can kill a group just as dead.)

    The problem with decorum / respect-for-purpose in unmoderated online open-membership groups seems to mostly stem from the fact that people underestimate the importance of non-verbal signals in moderating/regulating behavior. And so there is a dearth of such signals available in such groups. Our brains didn't evolve to play the game of socializing without these signals, any more than ants evolved to coordinate without pheromones. So many people's brains begin to play the game in degenerate / anti-social ways.

    From what I've been able to gather, from personal interactions with many people who admit to being "Internet trolls" at some point in their lives... their behavior was almost never intentional maliciousness/active-disregard-for-others on their part. It's rather an emergent behavior — something they "just ended up doing" — given a lack of (non-verbal-signal-alike) calibrating feedback.

    And why is there so little non-verbal-signal-alike communication online?

    Well, for one thing, we often aren't even aware we're giving off such signals; and so, if we need to consciously choose to communicate them (as we do in online contexts), then we simply fail to do so, because the majority of these signals never even rise to our conscious attention as something to be communicated.

    And even when we do become aware of them, we often don't feel them to be important enough to be "worth" going to the effort of translating into some more conscious/explicit/non-subtextual form of communication.

    And then, even when a strong desire to communicate a nonverbal signal does bubble up within us... most online chat/forum systems are horrible at transmitting such signals with any degree of fidelity, when they transmit them at all. Especially the kinds of signals used for intra-group behavior regulation.

    Facebook, for example, has reaction emojis on both posts and comments — but no reaction emoji that transmits a sentiment like "I disapprove of you saying this; please stop" (e.g. U+1F611 EXPRESSIONLESS FACE or U+1FAE4 FACE WITH DIAGONAL MOUTH). Rather, the only reaction emoji available are those meant to react sympathetically to the emotive content of the post/comment — e.g. with anger, sadness, etc. (People do try to use the "anger" reaction to express disapproval of posts; but when the content itself is often "ragebait" / meant to evoke anger, the poster won't necessarily understand that these reactions are being directed at them, rather than at their post.)

    Further, no chat system or forum I'm aware of has participant-visible signals of "detach rate" — i.e. there's no way for people to know when others are clicking on their posts, reading one line, doing a 180 and running away as fast as they can. (YouTube videos expose this metric to their creators; I think it's actually very helpful for them. It could do with being implemented far more widely.)
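    To make the idea concrete, here is a hypothetical sketch of what such a "detach rate" metric might look like. The function name, input format, and 5-second threshold are all invented for illustration; no platform actually exposes its data this way:

```python
# Hypothetical sketch: computing a per-post "detach rate" from dwell times.
# A "view event" here is just how long (in seconds) a reader stayed on the
# post before navigating away. The threshold is an arbitrary choice.

def detach_rate(dwell_times, threshold_seconds=5.0):
    """Fraction of views where the reader left within `threshold_seconds`."""
    if not dwell_times:
        return 0.0
    quick_exits = sum(1 for dwell in dwell_times if dwell < threshold_seconds)
    return quick_exits / len(dwell_times)

# Example: three of four readers bounced almost immediately.
print(detach_rate([1.2, 0.8, 42.0, 2.5]))  # 0.75
```

    Surfacing a number like this to posters (rather than only to platform analysts) would be the participant-visible version of the signal described above.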

    (And, to be a conspiracy theorist for a moment: I think, in both cases, this is probably intentional. The explicit purpose of signals that "regulate behavior", after all, is to make people engage less in certain anti-social behaviors. Making any such tools available will therefore inevitably make any kind of platform-aggregate "engagement metrics" go down! If they were ever temporarily introduced, they'd have been quickly removed again with this justification.)

    • Great analysis. I don't think it's conspiracy-theorizing to believe it's intentional, or at least a result of KPIs.

      One thing I think you are missing is that in-person groups are usually far smaller. Anything with 1,000 people would be organised, and there would be rules of behaviour, moderation of discussion, etc. Most often, if something is that big, it's mostly an audience.

      I think the other thing is that in an FB group, unlike real life, there is no real community or relationships. If you annoy people in real life it has consequences; in an FB group there are none.


My Facebook feed is great, my X feed is great. I use Facebook and X not because I like Mark Zuckerberg and Elon Musk, but because I genuinely read interesting things and interact with people I like.

That being said, I don't spend too much time on social networks because I have lots of other things to do.