
Comment by Lerc

1 day ago

Part of me thinks that if the case against social media were stronger, it would not be litigated on substack.

A lot of things suck right now. Social media definitely gives us the ability to see that. But using your personal ideology to link correlations is not the same thing as finding causation.

There will undoubtedly be some damaging aspects of social media, simply because it is large and complex. It would be highly unlikely for all of those factors to always align in the direction of good.

All too often, a collection of cherry-picked studies is presented in books targeting the worried public. It can build a public opinion that is at odds with the data. Some people write books just to express their ideas. Others, like Jonathan Haidt, seem to think that putting their effort into convincing as many people as possible of their ideology is preferable to putting effort into demonstrating that their ideas are true. There is a growing notion that perception is reality: convince enough people and it is true.

I am prepared to accept aspects of social media are bad. Clearly identify why and how and perhaps we can make progress addressing each thing. Declaring it's all bad acts as a deterrent to removing faults. I become very sceptical when many disparate threads of the same thing all seem to coincidentally turn out to be bad. That suggests either that there is an underlying reason that has been left unstated and unproven, or that the information I have been presented with is selective.

> Part of me thinks that if the case against social media were stronger, it would not be litigated on substack.

It's litigated all over and has been for a decade.

Australia, for example, has set an age limit of 16 to have social media; France, 15. Schools and countries are trying various phone bans. There's research into it. And there are whistleblowers describing Facebook's own internal research, which the company suppressed because it would show some of the harm.

Perhaps you spend too much time on social media?

  • I am aware that laws have been passed on a wide range of issues against expert advice, whether it be protecting the right to assault children, punishing addicts instead of preventing harm, or cutting children off from their most-used method of first contact with mental health care.

    Since you bring up the Australian law as an example I shall check the expert opinion on that.

    > For the second time in a week, I find myself in the peculiar position of seeing our research misinterpreted and used to support specific (and ill-advised) policy - this time by the Australian government to justify a blanket social media ban for those under 16.

    https://www.linkedin.com/posts/akprzybylski_the-communicatio...

    > This open letter, signed by over 140 Australian academics, international experts, and civil society organisations, addresses the proposal to ‘ban’ children from social media until the age of 16. They argue that a ‘ban’ is too blunt an instrument to address risks effectively and that any restrictions must be designed with care.

    https://apo.org.au/node/328608

    https://ccyp.wa.gov.au/news/anzccga-joint-statement-on-the-s...

    https://humanrights.gov.au/about/news/proposed-social-media-...

  • You’re strengthening OP’s point instead of undermining it.

    The “some governments banned it for kids” argument is an appeal to authority, a logical fallacy.

    The actions of tech-reactionist leftist governments absolutely do not constitute sound science or evidence in this matter.

    And if you’re claiming the French government only makes policy based on sound data, I will point you to their currently unraveling government over the mathematically impossible pension scheme they’ve created.

    • Responding to the point that "it's [only] litigated on substack": things like government bans are relevant counter-examples.

      The bans might be unfounded or well-founded, and you might agree with them or not, but clearly the idea that social media might be bad has spread beyond substack.

      1 reply →

    • Your argument contains the fallacy fallacy, a logical fallacy in which one wrongly cites an informal fallacy in order to discredit a valid argument.

      The actions of several democratic governments are evidence that there is enough popular support for these actions to argue for a broader trend. And before you try for a gotcha: I am well aware that a democratic government can enact regulations without a direct vote proving that a majority of people support such an action. But inasmuch as a government reflects the will of the governed, etc etc etc.

      3 replies →

  • > set an age limit of 16 to have social media

    This just shows how futile it is. How do you actually stop someone from using social media? If a 15-year-old signs up for Mastodon, what is Australia going to do about it?

    • I'm guessing it's mostly useful as a guide for parents, but I haven't seen any hard data

      It shows it's not just a debate on substack though

      1 reply →

I feel like, regardless of all else, the fact of algorithmic curation is going to be bad, especially when it's contaminated by corporate and/or political interests.

We have evolved to parse information as if its prevalence is controlled by how much people talk about it, how acceptable opinions are to voice, and how others react to them. Algorithmic social media intrinsically destroy that: they change how information spreads, but not how we parse its spread.

It's parasocial at best, and very possibly far worse at worst.

  • No doubt the specific algorithms used by social media companies are bad. But what is "non-algorithmic" curation?

    Chronological order: promotes spam, which will be mostly paid actors.

    Manual curation by "high-quality, trusted" curators: who are they, and how will they find content?

    Curation by friends and locals: this is probably an improvement over what we have now, but it's still dominated by friends and locals who are more outspoken and charismatic; moreover, it's hard to maintain, because curious people will try going outside their community, especially those who are outcasts.

    EDIT: Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to.
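
    To make the contrast concrete, here is a minimal sketch (Python, with an invented Post type; not any real platform's API) of "curation" as interchangeable ranking functions. The point is that chronological ordering is itself just another algorithm, with its own failure modes:

      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          text: str
          timestamp: float   # unix seconds
          engagement: int    # likes + shares + comments

      def chronological(posts, following):
          # "non-algorithmic": newest-first, only from accounts you chose;
          # spam-prone because recency is trivial to manufacture
          mine = [p for p in posts if p.author in following]
          return sorted(mine, key=lambda p: p.timestamp, reverse=True)

      def engagement_ranked(posts):
          # what commercial feeds converge on: surface whatever gets reactions
          return sorted(posts, key=lambda p: p.engagement, reverse=True)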

    • > Chronological order: promotes spam, which will be mostly paid actors.

      If users choose who to follow, this is hardly a problem. Also, classical forums dealt with spam just fine.

      7 replies →

    • > Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to.

      This is exactly why it's a problem. It doesn't even matter whether the algorithm is trained specifically on negative content. The result is the same: negative content is promoted more because it sees more engagement.

      The result is more discontent in society, people are constantly angry about something. Anger makes a reasonable discussion impossible which in turn causes polarisation and extremes in society and politics. What we're seeing all over the world.

      And user-sourced content is a problem too, because anyone can use it to run manipulation campaigns. At least with traditional media there was an editor who would make sure fact-checking was done. The social media platforms don't take responsibility for the content they publish.
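
      A toy simulation (illustrative numbers only, assuming the negativity bias from the studies linked above) shows the loop: the ranker scores raw engagement and never looks at sentiment, yet negative posts dominate the top of the feed because reactions feed back into exposure:

        import random

        random.seed(0)
        # 100 posts, half negative; assume people react to negative posts
        # somewhat more often (0.6 vs 0.4 chance per impression)
        posts = [{"negative": i % 2 == 0, "engagement": 0} for i in range(100)]

        for _ in range(10_000):
            # impressions flow preferentially to already-engaging posts
            weights = [1 + p["engagement"] for p in posts]
            post = random.choices(posts, weights=weights)[0]
            if random.random() < (0.6 if post["negative"] else 0.4):
                post["engagement"] += 1

        top10 = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:10]
        print(sum(p["negative"] for p in top10), "of the top 10 are negative")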

      2 replies →

    • I've been curating my own feeds manually for decades now. I choose who to follow, and actively seek out methods of social media use that are strictly based on my selections and show things in reverse chronological order. Even Facebook can do this with the right URL if you use it via the web[1].

      You start with almost nothing on a given platform but over time you build up a wide variety of sources that you can continue to monitor for quality and predictive power over time.

      [1] https://www.facebook.com/?sk=h_chr

    • > But what is "non-algorithmic" curation?

      Message boards have existed for a very long time, maybe you're too young to remember, but the questions you're raising have very obvious answers.

      They're not without issues, but they have a strong benefit: everyone sees the same thing.

  • I have wondered if it's not algorithmic curation per se that is the problem, but personalised algorithmic curation.

    When each person is receiving a personalised feed, there is a significant loss of common experience. You are not seeing what others are seeing and that creates a loss of a basis of communication.

    I have considered the possibility that the solution might be to enable many areas of curation, where in each domain the thing people see is the same for everyone. In essence, subreddits. The problem then becomes the nature of the curators; subreddits show that human curators are also not ideal. Is there an opportunity for public algorithmic curation? You subscribe to the algorithm itself and see the same thing as everyone else who subscribes. The curation is neutral (though it will be subject to gaming; the fight against bad actors will be perpetual in all areas). A rough sketch of the idea follows below.
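
    Here is one way "subscribing to an algorithm" could look (a hypothetical design, not any existing platform; the scoring is the oft-cited HN-style gravity formula): the ranking function takes only the post pool, never the viewer, so every subscriber to the same algorithm sees an identical feed, and the function itself can be published and audited:

      from typing import Callable, Dict, List

      Post = dict  # e.g. {"id": str, "upvotes": int, "age_hours": float}
      Algorithm = Callable[[List[Post]], List[Post]]

      def front_page(posts: List[Post]) -> List[Post]:
          # public, deterministic scoring; no per-viewer inputs at all
          def score(p: Post) -> float:
              return p["upvotes"] / (p["age_hours"] + 2) ** 1.8
          return sorted(posts, key=score, reverse=True)

      # users subscribe to a named algorithm, not to a personalised feed
      REGISTRY: Dict[str, Algorithm] = {"front_page": front_page}

      def feed_for(subscription: str, posts: List[Post]) -> List[Post]:
          # note: the viewer is not a parameter, so the feed is shared
          return REGISTRY[subscription](posts)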

    I agree about the tendency for the prevalence of conversation to influence individuals, but I think it can be resisted. I don't think humans live their lives controlled by their base instincts; most learn to find a better way. It is part of why I do not like the idea of de-platforming.

    I found it quite instructive when Jon Stewart did an in-depth piece on trans issues. It made an extremely good argument, but it infuriated me to see, a few days later, so many people talking about how great it was because Jon agreed with them and he reaches so many people. They completely missed the point. The reason it was good is that it made a good case. This cynical "it's good if it reaches the conclusion we want and reaches lots of people" attitude is what is destroying us. Once you feel it is not necessary to make your case, only to shout the loudest, you lose the ability to win over people who disagree, because they don't like you shouting and you haven't made your case.

    • > the solution might be to enable many areas of curation but in each domain the thing people see is the same for everyone.

      Doesn't this already happen to some extent, with content being classified into advertiser-friendly bins and people's feeds being populated primarily by top content from within the bins the algorithm deems they have an interest in?

      > Once you feel like it is not necessary to make your case, but just shout the loudest, you lose the ability to win over people who disagree because they don't like you shouting and you haven't made your case.

      To some extent, this is how human communication always worked. I think the biggest problem is that the digital version of it is sufficiently different from the natural one, and sufficiently influenceable by popular and/or powerful actors, that it enables very pathological outcomes.

      1 reply →

  • Social media companies should be liable for the content that their automatic curation puts forward. If a telecom company actively gives your number to scammers to call you up, it should not hide behind the argument that it is not them scamming you, but someone else. Applying regular anti-fraud and defamation laws would probably put an end to algorithmic curation.

It's increasingly discussed in traditional media too, so let's toss out that glib first-line dismissal.

More and more people declaring it's net-negative is the first step towards changing anything. Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum.

(Or we could argue that "social media" in the Facebook-era sense is just one part of a larger entity, "the internet," that we're singling out.)

  • > More and more people declaring it's net-negative is the first step towards changing anything.

    I accept that "net-negative" is a cultural shorthand, but I really wish we could go beyond it. I don't think people are suddenly looking at both sides of the equation and evaluating rationally that their social media interactions are net negative.

    I think what's happening is a change in the novelty of social media. That is, the net value is changing. Originally, social media was fun and novel, but once that novelty wears away it's flat and lifeless. It's sort of abstractly interesting to discuss tech with like-minded people on HN, but once we get past the novelty, I don't know any of you. Behind the screen names is a sea of unidentifiable faces that I have to assume are like-minded to have any interesting discussions with, but which are most certainly not like me at all. It's endless discussions with people who don't care.

    I think that's what you're seeing: a society caught up in the novelty, losing that naive enjoyment. Not a realization of net effects.

  • >It's increasingly discussed in traditional media too so let's toss out that first line glib dismissal.

    Traditional media is the absolute worst possible source for anything related to social media because of the extreme conflict of interest. Decentralised media is a fundamental threat to the business model of centralised media, so of course most of the coverage of social media in traditional media will be negative.

    • Unfortunately most of what people understand as "social media" is not decentralized, and most of the biggest names on Substack in particular come directly out of "traditional media", which is exactly why it's not a real alternative. Substack is just another newspaper except now readers have to pay for every section they want to read.

      4 replies →

    • I wish to quibble with you on this as there is a love/hate relationship between the conventional media and social media.

      The mainstream media have several sources, including the press releases that get sent to them, the newswires they get their main news from, and social media.

      In the UK the press, and in particular the BBC, were early adopters of Twitter. Most of the population would not have heard of it had it not been for the journalists at the BBC. The journalists thought it was the best thing since the invention of the printing press. Latterly, Instagram has become an equally useful source to them, and, since Twitter became X, there is less copying and pasting of tweets.

      The current U.S. President seems capable of dictatorship via social media, so following his messages on social media is what the press do. I doubt any journalist has been on whitehouse.gov for a long time; the regular web and regular sources have been demoted.

  • "net-negative" sounds like a rigidly defined mathematically derived result but it's basically just a vibe that means "I hate social media more than I like it."

    • I'm struggling to understand your point, especially since the conclusion you posit is rather glib and dismissive.

      Net-negative is not quantifiable. But it is definitely qualifiable.

      I don't think you have to think of things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievably broad and deep effect powerful entities can have on public information/opinion through SM.

      I think we can agree all of these are bad, and a net-negative, without needing any mathematical rigor.

      4 replies →

  • What’s being discussed in the traditional media has no value anymore because it’s a dead medium, inhabited by dinosaurs.

  • I did not consider it a glib dismissal, and I would not consider traditional media an appropriate avenue to litigate this either. "Trial by media" is a term used to describe something that we generally think shouldn't occur.

    The appropriate place to find out what is and isn't true is research. Do research, write papers, discuss results, resolve contradictions in findings, reach consensus.

    The media should not be deciding what is true, they should be reporting what they see. Importantly, they should make clear that the existence of a thing is not the same thing as the prevalence of a thing.

    >Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum.

    I think much of my post was in effect saying that a good deal of the problem is the belief that building political momentum is more important than accuracy.

    • Weren’t you, in your initial post, suspicious that the research process was settling on a pessimistic consensus view? Figuring that, because most every formal study is coming up negative (or “no effect supported”), it must be that the research is selective and designed to manipulate? And that a phenomenon can’t exhibit a diversity of uniformly bad effects without “an underlying reason that has been left unstated and unproven”?

      I don’t know how I’d state or prove a single underlying reason why most vices are attractive-while-corrosive and still, on the whole, bad. It feels like priests and philosophers have tried for the whole human era to articulate a unified theory of exactly why, for example, “vanity is bad”. But I’m still comfortable saying gambling feels good and breaks material security, lust feels good and breaks contentment (and sometimes relationships), and social media feels good and breaks spirits.

      I certainly agree that “social media” feels uncomfortably imprecise as a category—shorthand for individualized feeds, incentives toward vain behavior, gambling-like reinforcement, ephemerality over structure, decontextualization, deindividuation, and so on; as well as potentially nice things like “seeing mom’s vacation pics.”

      If we were to accept that social media in its modern form, like other vices, “feels good in the short term and selectively stokes one’s ego,” would that be enough of a positive side to accept the possibility of uniformly negative long-run effects? For that matter, and this is very possible: is there a substantial body of research drawing positive conclusions that I’m not familiar with?

    • > The appropriate place to find out what is and isn't true is research. Do research, write papers, discuss results, resolve contradictions in findings, reach consensus.

      Few hot-button social issues are resolved via research, and I'm not sure they should be. On many divisive issues in social sciences, having a PhD doesn't shield you from working back from what you think the outcome ought to be, so political preferences become a pretty reliable predictor of published results. The consensus you get that way can be pretty shoddy too.

      More importantly, a lot of it involves complex moral judgments that can't really be reduced to formulas. For example, let's say that, on average, social media doesn't make teen suicides significantly more frequent. But are we OK with any number of teens killing themselves because of Instagram? Many people might categorically reject this for reasons that can't be dissected in utilitarian terms. That's just humanity.

    • > The media should not be deciding what is true, they should be reporting what they see.

      Largely I don't think the media has been dictating anything. They've just been reporting on the growing body of evidence showing that social media is harmful.

      What you'd call "trial by media" is just spreading awareness and discussion of the evidence we have so far, which seems like a very good thing. Social media moves faster than scientific consensus, and there's a long history of industry doing everything it can to slow that process down and muddy the waters. We've seen Facebook doing exactly that already by burying child safety research.

      A decade or more of "Do thing, say nothing" is not a sound strategy when the alternative is letting the public know about the existing research showing real harms, and letting them decide for themselves what steps to take on an individual level and what concerns to bring to their representatives, who could set policy to mitigate those harms or even dedicate funding to further study them.

    • There's plenty of research. Plenty. None of it is positive.

      Summaries with links here. https://socialmediavictims.org/effects-of-social-media/

      It's really not hard to confirm this.

      The problem isn't that "building political momentum is more important than accuracy", it's that social media is a huge global industry that pumps out psychological, emotional, and political pollution.

      And like all major polluters, it has a very strong interest in denying what it's doing.

      1 reply →

I don't think the reasoning needs to be that complex. Addictive things are harmful, and social media is designed to be addictive (and increasingly so). There is a correlation between higher addictiveness and greater harm, and children in particular are vulnerable to addictive things. So, given the above, the expectation for social media, which is highly addictive, is that it would be highly harmful, unless there are clear reasons to think it's not.

> I am prepared to accept aspects of social media are bad. Clearly identify why and how and perhaps we can make progress addressing each thing.

Companies intentionally design social media to be as addictive as possible, which should be enough to declare them bad. Should we also identify each chemical in a vape and address each one individually before banning them for children? I think such a ban for social media would probably be overkill, but it should not be controversial to ban phone use in school.

I'm with you on the skepticism, but I also think the underlying point is worth acknowledging:

Social media represents a step change in how we consume news about current events. No longer are there central sources relied on by huge swaths of the population: institutions which could be held accountable as a whole and which stood to lose from poor reporting. Previous behemoths like the NYT, WaPo, and Bloomberg are now comparatively niche and fighting for attention. This feels so obvious it's not necessary to litigate, but if someone has statistics to the contrary, I'll be happy to look deeper and re-evaluate.

I agree, one should not immediately succumb to fear of the new. At the same time, science is slow by design. It takes years to construct, execute, and report on proper controlled studies, and decades to iterate and solidify a holistic analysis. In the meantime, it seems naive to run forward headlong, assuming the safest outcome. We'll have raised a generation or two before we can possibly reach analytical confidence. Serious, irreparable damage could be done far before we have a chance to prove it.

> I am prepared to accept aspects of social media are bad. Clearly identify why and how

That has been done over and over again, but as long as lawmakers and regulators remain passive, nothing will improve.

There's a lot of money in social media, literally hundreds of billions of dollars. I expect the case against it will continue to grow, like the case against cigarettes did.

I will say this, and this is anecdotal, but other events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media, and how much social media does to amp up the anger and tone of people. When I open Twitter, or Facebook, or Instagram, or any of the smaller networks I see people baying for blood. Quite literally. But when I talk to my friends, or look at how people are acting in the street, I don't see that. I don't see the absolute frenzy that I see online.

If social media turns up the anger that much, I don't think it's worth the cost.

  • >There's a lot of money in social media, literally hundreds of billions of dollars. I expect the case against it will continue to grow, like the case against cigarettes did.

    I don't think it follows that something making money must do so by being harmful. I do think strong regulation should exist to prevent businesses from introducing harmful behaviours to maximise profits, but to justify that opinion I have to believe that there is an ability to be profitable and ethical simultaneously.

    >events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media

    On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media. It's true that there was incorrect information and misinformation on social media, but it was also immediately challenged. That does create a source of conflict, but I don't think the solution is to accept falsehoods unchallenged.

    If anything, education is required to teach people to discuss opposing views without rising to anger or personal attacks.

    • > I don't think it follows that something making money must do so by being harmful.

      My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms.

      > On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media.

      I agree with your point here too, and I don't think the solution is to completely stop or get rid of social media. But, the problem I see is there are tons of corners of social media where you can still see the original lies being repeated as if they are fact. In some spaces they get challenged, but in others they are echoed and repeated uncritically. That is what concerns me - long debunked rumors and lies that get repeated because they feel good.

      > If anything, education is required to teach people to discuss opposing views without rising to anger or personal attacks.

      I think many people are actually capable of discussing opposing views without it becoming so inflammatory... in person. But algorithmic amplification online works against that and the strongest, loudest, quickest view tends to win in the attention landscape.

      My concern is that social media is lowering people's ability to discuss things calmly, because instead of a discussion amongst acquaintances, everything is an argument against strangers. And that creates a dynamic where people who come to argue are not arguing against just you, but against every position they think you hold. We presort our opponents into categories based on perceived allegiance and then attack the entire image, instead of debating the actual person.

      But I don't know if that can be fixed behaviorally, because the challenge of social media is that the crowd is effectively infinite. The same arguments get repeated thousands of times, and there's not even a guarantee that the person you are arguing against is a real person and not just a paid employee, or a bot. That frustration builds into a froth because the debate never moves; it just repeats.

      4 replies →

  • > If social media turns up the anger that much, I don't think it's worth the cost.

    It doesn't. It's just that when people can publish whatever with impunity, they do just that.

    Faced with the reality of what they're calling for, they would largely stop immediately.

    I believe the term for that is "keyboard warrior".

Seems like you're the guy who likes to go against the norm, even when you're wrong. Social media being controlled by corporations, with algorithms built to create addiction, should be enough, unless you have other motives to ignore all this.

The Nepalese just elected a government on Discord. Who says we can’t litigate on substack? Hell, it might be the future.

There are a lot of biochemical hypotheses for why social media is unhealthy that I personally buy into.

All this is good except that to achieve any kind of actual political action in this actual universe in which we live, we must use rhetoric. Asking people to be purely rational is asking them to fail to change anything about the way our culture works.

> Part of me thinks that if the case against social media were stronger, it would not be litigated on substack.

For all we know there are millions who have withdrawn and are making the case outside of social media. Or living the case.

This reply seems a bit fish-in-water to me.

I think the problem with social media is that it’s easy to exploit: all the most powerful people in the world perceive themselves to benefit from social media. This isn’t true for something like smoking.

The problem is that this kind of long-form "think piece" misses the basics and even uses polarising denialist phrases like "fear mongering".

There is an obvious incoherence, and even misreasoning, present in the people most ruined by the new media.

For example, you might want to drive the risk of something to zero. To do that, you need to calmly respond with policy to every bad event of that type, adding more restrictions at some cost. This should be uncontentious to describe, yet again and again the pattern is to mistake the desires, the costs, and the interventions.

I can't even mention examples of this without risking massive karma attacks. That is the state of things.

I used to think misreasoning was just something agitprop accounts did online, but years ago I started hearing the same broken calculus spoken by IRL humans.

We need a path forward to make people understand that they should almost all disagree, but they MUST agree on how they disagree, or else they don't actually disagree. They are just barking animals waiting for a chance to attack.

There's a concerted assault on social media from the powers that be because social media is essentially decentralised media, much harder for authoritarians to shape and control than centralised media. Social media is why the masses have finally risen up in opposition to what Israel's been doing in Gaza, even though the genocide has been going on for over half a century: decentralised information transmission allowed people to see the reality of what's really going on there.

  • It's not decentralized at all. It represents a total commercialization of the town square.

    The situation you reference with regard to Israel/Gaza is only possible because TikTok is partially controlled by Chinese interests. But it also goes to show that TikTok could easily have been banned or censored by Western governments. Just kick them off the app stores and block the servers. For example, there is no Net Neutrality protection in the USA that would defend them if the government wanted to quietly throttle their network speed.

    Social media as it exists now is not decentralized in any meaningful capacity.