Comment by bigfishrunning

5 days ago

I feel like this is general knowledge for the past 5 or so years, but the real question is "What do we do about it?". Personally, I put real effort into not spending time being outraged online, but this is a societal ill that's bigger than I am...

"What do we do about it?"

Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.

  • Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.

    • There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)

      The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And the minor issue that the vast majority of things people complain about are just plain First Amendment-protected speech, so it's not like the §230 protections actually matter as the content isn't illegal in the first place.

    • I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads they promote when a newspaper can be held liable if they were to publish the same ad.

      2 replies →

    • Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think ultimately search and other forms of selective content surfacing should never have been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. I think failing to tackle this problem will also make the web unusable, and in a worse way.

      4 replies →

    • This seems the same as news organisations choosing which news to report on, but driven by user behaviour rather than the org's employees themselves.

  • oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead. But these allegations date to when the company was fully under the control of ByteDance, and not US-regulated entities at all.

    • > oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead

      Which should make people think twice when they call for government regulation on speech as a solution to content they don't want other people to see.

      The more you give the government power to control speech, the more they'll use those laws to further their own interests.

  • Wouldn’t we need to shut down all news outlets, all the twitters and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close

    • There are people in this thread directly calling for us to strip protections from search engines and force them to shut down.

      I think a lot of this discussion has become detached from reality and we’re just entertaining some people’s impossible fantasies about shutting down the internet and returning to the past.

      1 reply →

  • >> Meta and TikTok have no natural right to exist if they are a net negative to society.

    Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.

  • Regulating content that makes people enraged seems like a slippery slope towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or the future governments with this much power.

    For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?

    The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.

    • At least in the US the government can't regulate speech (for the most part). But what we could do is regulate recommendation algorithms or other aspects of the overall design in a way that's generalized enough to be neutral in regards to any particular speech. And such regulations don't need to apply to any entity below some MAU or other metric.

      Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.

      6 replies →

    • > I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid)

      I’ve reported videos that look like sexual exploitation, videos that call for violence and videos that promote hate (xyz people are cockroaches) and all I’ve gotten is that “it does not go against community guidelines” with a link to block the person who created them. So any concerns of “where do we draw the line” are in my opinion pointless because the bare minimum isn’t even being done.

      1 reply →

    • I'm sorry, but are you saying it's hard to figure out what to do, so let's do nothing? Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content; the slope is only slippery because the salivating mouths of these social platforms grease it.

      Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.

      3 replies →

  • regulation will never happen because these are instruments to control the masses

    • All the more reason for regulation. If people catch on to the fact that they are being manipulated and abused by the platforms to "drive engagement", they might abandon them or spend less time on them. If the government regulates these platforms so that they are safer, or at least less harmful, people will feel better about using them, giving the government a larger platform to use to control the masses.

      1 reply →

Tax and heavily regulate online advertising. The root of the problem is that it is very, very lucrative to drive engagement and until you get rid of the monetary incentive, the problem will never go away.

"Make the drug less good" likely isn't the answer. Nor is banning it.

What caused Gen Z to drink less than millennials? Maybe Gen Z has the answer.

  • yeah, it's called "smoking weed".

    • Technology, culture, legalization of pot, adtech, covid, there are a metric ton of factors that all had significant impact on both decreasing socialization and reduction in drinking. And lowering the birth rates, and the number of healthy relationships, healthy friendships, etc.

      I'm for legalizing all drugs, regulating the sale, ensuring quality and purity, and educating the public. Cognitive liberty is sacred - but the dip in drinking has a whole lot of causes.

      A healthier society would be more social and get out and drink more, I think.

    • Millennials love their weed, party drugs too, it took over Gen X drinking in some way.

      But I find Zoomers to be rather tame in terms of drinking, smoking, drugs, unsafe sex, etc... Few of the traditional vices, really.

  • Inflation, mostly. And a lot of us lack social skills, so we don't have many friends, and thus no reason to go out and get drunk.

    But like, when a pint is $12 and mixed drinks are $15+ sobriety starts looking more appealing.

    Source: Am gen Z.

  • Decades of science communication and real life examples of knowing (of) alcohol addicts

    • I'd wager how expensive it has gotten, plus a year or two of lockdowns which led to a whole generation of people not going out to get wasted as soon as they're legally allowed to, had way more effect.

      Oh, and weed being increasingly legal to consume.

      1 reply →

    • Real life experience with alcoholics would at best be constant over time, or be diminishing (since Gen Z drinks less).

      Also, it seems like the science on whether science communication actually changes behavior doesn't point towards it being much of a cause here.

  • > What caused Gen Z to drink less than millennials?

    Social media addiction?

    • As one of said generation, I would chalk it up to instant communication creating innumerable shallow remote relationships that significantly replace time spent with others in person.

  • Gen Z drinks less because alcohol isn’t enough of a fix and hard drugs are way cheaper. The answer isn’t what you’re looking for.

It’s like asking how do you get people to stop drinking alcohol

As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.

Aka every minority-majority split on every issue ever.

So the answer is: live in a society governed by science. Unfortunately none exist

  • > So the answer is: live in a society governed by science. Unfortunately none exist

    Science is a lagging indicator of reality. It is by definition conservative (in that it requires rigorous, repeatable data before it can label something as true). Because of that, there's usually a pretty substantial gap between human discovery and scientific consensus.

    Mindfulness was discovered, as an example, to be beneficial as far back as 500 BCE. It wasn't "proven" with science until 1979.

    Sometimes we just need to rely on lived experience to make important decisions, especially regulation. We can't always wait for science.

  • I drink, but I acknowledge and care about the health effects. I care more about how it makes me feel. Don't assume everyone who smokes or drinks alcohol or takes another type of drug just doesn't care. Why don't we ban dangerous sports like rock climbing or BASE jumping or MMA while we're at it?

  • We handled smoking pretty well by making it cost more and banning it in public places. If TikTok was banned from official app stores it would essentially go away.

  • It's like how do you get people to stop letting their kids drink alcohol.

    Everyone knows what the dangers of alcohol are now. We need to get reliable data one can base policy on and then let the public health system do their thing. Maybe not every health authority but enough of them to protect the species at large. Then we'll get social media out of schools, away from young people, vulnerable folks, etc.

> "What do we do about it?"

I'd suggest something like banning algorithmic amplification - your feed is posts of people you follow and nothing else. But that's not what will happen. What will happen is there will be [1] vague laws about preventing vague "harm", written to give legal teeth to the Overton window. Not in those words, but companies that would go against it will be mired in lawfare, while those that comply will be allowed to grow.

And if you complain, they'll motte-and-bailey you - you're not in favor of "harm", are you? We're not an authoritarian speech police, we only seek to protect people from "harm".

[1] Or rather, are - see https://en.wikipedia.org/wiki/Online_Safety_Act_2023
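The follow-only proposal above is easy to make concrete. Here is a minimal sketch (all names and the `outrage_score` signal are hypothetical) contrasting a feed that is a pure function of who you follow with one ranked by an engagement signal, which is where the editorial choice lives:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int        # seconds since epoch (illustrative)
    outrage_score: float  # hypothetical engagement signal

def follow_only_feed(posts, following):
    """Chronological feed: only accounts the user follows, newest first."""
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.timestamp,
        reverse=True,
    )

def engagement_ranked_feed(posts):
    """Ranked feed: surfaces whatever the scoring function favors,
    regardless of who the user follows -- an editorial decision."""
    return sorted(posts, key=lambda p: p.outrage_score, reverse=True)

posts = [
    Post("alice", 100, 0.1),
    Post("bob", 200, 0.9),
    Post("carol", 300, 0.5),
]

print([p.author for p in follow_only_feed(posts, {"alice", "carol"})])
# ['carol', 'alice'] -- bob is excluded; order is purely chronological
print([p.author for p in engagement_ranked_feed(posts)])
# ['bob', 'carol', 'alice'] -- bob is pushed first despite not being followed
```

The point of the contrast: the first function has no knobs for the platform to turn, while the second one's ranking key is a choice the platform makes, which is the behavior a ban on algorithmic amplification would target.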

My IG feed is largely taken over by congressional members' videos, crazy $#!t the president (and his crew) says, and the Keystone Cops. And boy howdy is there a lot of rage-inducing behavior going on.

I feel more informed than if I was only listening to NPR.

That said, I stay away from anything that’s produced—sound track, too many cuts/edits, talking head commentary. I guess in this context, if I’m going to be driven to emotional anxiety, it’s going to be from something that happened or something someone said, and not the internet’s interpretation.

You can’t “produce content” that I will watch _as news_. It has to be in some real way happening (with some deference to Rashomon).

The people who were voted to power (across the globe, not just the US) to do something about it are stuck getting their dopamine kicks posting garbage on the same platforms. It’s truly a terrible timeline we are in.

Regulate it. Laws, consequences, etc.

  • Laws appear to have fallen out of fashion. And a disturbing proportion of the loudest people like it. Then you have those who ought to know better but are attention-seeking, selfish assholes who somehow find it «interesting» or think they adhere to «principles».

    The latter category know who you are. You downvoted this comment.

    • I recently provided guidance to state legislators, with that guidance making its way into law in regards of balcony solar. If you don’t think that making law works, I would encourage you to get involved somewhere that means something to you.

      It turns out that if you present as an honest, non-interested party, people will call you and ask you for your advice. I do admit that the ease of this is going to be a function of the people you are up against and the subject being regulated. My point of this comment is: default to action. “You can just do things.”

      1 reply →

    • > Laws appear to have fallen out of fashion.

      Laws are very much fashionable, but only for us. “Rules for thee but not for me” is what's in season right now.

      1 reply →

What do we do? We treat platforms with algorithmic news feeds as publishers not platforms in the Section 230 sense.

Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.

So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?

This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.

Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. You can try the journalism defense, but it won't shield you from defamation. Traditional media outlets are normally very careful about what they publish for this reason.

But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.

We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.

[flagged]

  • Nothing is inherently illegal. Laws are created in response to an undesirable outcome - murder wasn't illegal until it was made illegal.

  • [flagged]

    • Consuming social media doesn't have an inescapable negative impact on other people, unlike burning leaded fuel. In the same way that eating junk food doesn't. Should we ban junk food? What else do you want to ban from others just because it has a risk profile you personally don't feel comfortable with?

      4 replies →

    • I wonder where folks like this came from, and at what point did people who associate themselves with hacker culture decide that censorship is great.

      The OG hackers thought of censorship as network damage that needed to be routed around.

      People who support censorship always think of themselves as smarter than the rest. Dunning-Kruger, however, would suggest something different.

      1 reply →

  • > >"What do we do about it?"

    > nothing. if it isn't illegal, it isn't illegal.

    Are you suggesting that because something isn't illegal, it shouldn't be illegal?

    Are you perhaps a representative of the Triangle Shirtwaist Factory?

  • I'm not suggesting that it should be illegal, I'm just seeing this monetization of bad vibes and wondering how we can have less bad vibes. Pump the brakes a little.