Meta and TikTok let harmful content rise to drive engagement, say whistleblowers

5 days ago (bbc.com)

I feel like this has been general knowledge for the past 5 or so years, but the real question is "What do we do about it?". Personally, I put real effort into not spending time being outraged online, but this is a societal ill that's bigger than I am...

  • "What do we do about it?"

    Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.

    • Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.

      48 replies →

    • Oddly enough, the TikTok referred to here was supposed to be shut down in the US. But then the executive branch ignored the law while it organized handing the company over to Larry Ellison instead. And these allegations date to when the company was fully under the control of ByteDance, not US-regulated entities at all.

      1 reply →

    • Wouldn’t we need to shut down all the news outlets, all the Twitters, and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.

      2 replies →

    • >> Meta and TikTok have no natural right to exist if they are a net negative to society.

      Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.

    • Regulating content that makes people enraged seems like a slippery slope towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or future governments with this much power.

      For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?

      The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.

      13 replies →

  • Tax and heavily regulate online advertising. The root of the problem is that it is very, very lucrative to drive engagement and until you get rid of the monetary incentive, the problem will never go away.

  • "Make the drug less good" likely isn't the answer. Nor is banning it.

    What caused Gen Z to drink less than millennials? Maybe Gen Z has the answer.

  • It’s like asking how do you get people to stop drinking alcohol

    As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.

    Aka every minority-majority split on every issue ever.

    So the answer is: live in a society governed by science. Unfortunately none exist

    • > So the answer is: live in a society governed by science. Unfortunately none exist

      Science is a lagging indicator of reality. It is by definition conservative (in that it requires rigorous, repeatable data before it can label something as true). Because of that, there's usually a pretty substantial gap between human discovery and scientific consensus.

      Mindfulness, for example, was recognized as beneficial as far back as 500 BCE. It wasn't "proven" by science until 1979.

      Sometimes we just need to rely on lived experience to make important decisions, especially regulation. We can't always wait for science.

      3 replies →

    • I drink, but I acknowledge and care about the health effects. I care more about how it makes me feel. Don't assume everyone who smokes or drinks alcohol or takes another type of drug just doesn't care. Why don't we ban dangerous sports like rock climbing or BASE jumping or MMA while we're at it?

    • We handled smoking pretty well by making it cost more and banning it in public places. If TikTok were banned from official app stores, it would essentially go away.

      12 replies →

    • It's like how do you get people to stop letting their kids drink alcohol.

      Everyone knows what the dangers of alcohol are now. We need to get reliable data one can base policy on and then let the public health system do their thing. Maybe not every health authority but enough of them to protect the species at large. Then we'll get social media out of schools, away from young people, vulnerable folks, etc.

  • > "What do we do about it?"

    I'd suggest something like banning algorithmic amplification - your feed is posts of people you follow and nothing else. But that's not what will happen. What will happen is there will be [1] vague laws about preventing vague "harm", written to give legal teeth to the Overton window. Not in those words, but companies that would go against it will be mired in lawfare, while those that comply will be allowed to grow.

    And if you complain, they'll motte-and-bailey you - you're not in favor of "harm", are you? We're not an authoritarian speech police, we only seek to protect people from "harm".

    [1] Or rather, are - see https://en.wikipedia.org/wiki/Online_Safety_Act_2023
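The follow-only feed proposed above amounts to dropping engagement ranking entirely and showing followed accounts in reverse-chronological order. A minimal sketch of the contrast, with hypothetical post records (field names and the `engagement` score are illustrative stand-ins for whatever signal a real ranker would optimize):

```python
# Hypothetical post records; "engagement" stands in for whatever signal
# a ranking algorithm optimizes (clicks, watch time, outrage).
posts = [
    {"author": "@friend", "ts": 100, "engagement": 2, "text": "picnic pics"},
    {"author": "@stranger", "ts": 105, "engagement": 950, "text": "rage bait"},
    {"author": "@friend", "ts": 110, "engagement": 5, "text": "new job!"},
]

following = {"@friend"}

def engagement_feed(posts):
    """What the commenter objects to: rank by predicted engagement."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def follow_only_feed(posts, following):
    """The proposed alternative: followed accounts only, newest first."""
    return sorted(
        (p for p in posts if p["author"] in following),
        key=lambda p: p["ts"],
        reverse=True,
    )
```

With the sample data above, the engagement feed surfaces the high-engagement stranger's rage bait first, while the follow-only feed never shows it at all.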

  • My IG feed is largely taken over by congressional members’ videos, crazy $#!t the president (and his crew) says, and the Keystone Cops. And boy howdy is there a lot of rage-inducing behavior going on.

    I feel more informed than if I was only listening to NPR.

    That said, I stay away from anything that’s produced—soundtrack, too many cuts/edits, talking head commentary. I guess in this context, if I’m going to be driven to emotional anxiety, it’s going to be from something that happened or something someone said, and not the internet’s interpretation.

    You can’t “produce content” that I will watch _as news_. It has to be in some real way happening (with some deference to Rashomon).

  • The people who were voted to power (across the globe, not just the US) to do something about it are stuck getting their dopamine kicks posting garbage on the same platforms. It’s truly a terrible timeline we are in.

  • Regulate it. Laws, consequences, etc.

    • Laws appear to have fallen out of fashion. And a disturbing proportion of the loudest people like it. Then you have those who ought to know better but are attention-seeking, selfish assholes who somehow find it «interesting» or think they adhere to «principles».

      The latter category knows who they are. You downvoted this comment.

      4 replies →

  • What do we do? We treat platforms with algorithmic news feeds as publishers not platforms in the Section 230 sense.

    Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.

    So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?

    This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.

    Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. I can try the journalism defense, but it won't shield me from defamation. Traditional media outlets are normally very careful about what they publish for this reason.

    But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.

    We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.

  • [flagged]

"Harmful content" translation: what the government does not like. What ISRAEL does not like. Another call for more censorship from a forcibly financed state propaganda outlet nobody with a brain takes seriously. How original.

  • I don't know; I have straight-up calls for violence against Jews popping up on my TikTok, and when I report them, TikTok almost always comes back with "No Violations Found".

  • > Internal research shared with the BBC showed comments on Reels had significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement than elsewhere on Instagram.

    I mean, if you want to claim the Jews (let's be honest about what you mean by ISRAEL) are opposed to the above, then... good?

I can't say I'm surprised and I think most people wouldn't be surprised either. But it's always good to have evidence.

Is this unavoidable? I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.

  • > I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.

    Why? User engagement isn't the same thing as market share.

    If McDonald's trained its cashiers to insult you while taking your order, engagement would go up, and market share would go down.

  • I think the burden of curating your feed so that you do not see such content now rests with the user; they cannot rely on the platform to do it for them.

    • This assumes the platforms either allow such curation or respect it, which instagram and TikTok famously don’t

    • If the user even wants to do that. Why would they? They're looking for a sugar rush, they're not looking to eat their intellectual vegetables. How do you get children to eat vegetables?

      1 reply →

Of course they did. As long as they're legally allowed to do so and profit from doing so they will continue.

Does anyone know of watchdog agencies that do the research to document and litigate harmful algorithmic trends?

I know https://www.reset.tech/ does really good work in this space, but are there others, and who is funding them?

It's been the same story since at least 2012. It is well documented in the book "The Chaos Machine" by Max Fisher.

Facebook employees, journalists and psychologists have studied the phenomenon, and Facebook's (as well as YouTube's) response is always the typical "We have done something" to calm the protest, but it's never really the case. It's a constant game of deflecting, delaying, diminishing, denying.

As long as the general public responds to sensationalism, what’s stopping the social media platforms from exploiting it?

Most of it is clickbait anyway.

If you make 20 billion and the fine is 0.2 billion… I don’t think they care about their users’ mental health.

The idea that there is a certain category of content that is harmful and there are certain people who have the authority to declare what is harmful is extremely dangerous, practically how every single censorship system has ever been built.

Given how TikTok "trends" seem to consist mostly of "get teenagers to do stuff that causes huge expenses for US society":

* "eat Tide Pods"
* "stick a fork in electrical sockets in your school"
* "destroy your school's shit," aka "Devious Licks": bathrooms, Chromebooks (jamming stuff into the charging ports to start fires...)
* "drink a shitload of Benadryl to see what happens"
* "steal a Kia/Hyundai and drive 80mph, run from the cops, etc."

...convince me that this is not a purposeful attack on US society by the CCP?

  • Bingo. Not to mention the constant bullshit excuses back from TikTok claiming they were never the source of the trend and “their moderation stops it”.

  • Given that the 'tide pod challenge' was before TikTok's time and took place on wholly US-owned platforms like YouTube, we can safely assume it's all in your head. Most of the other stuff you're sharing sounds like a reflection of what you find out in the streets of any major US city. Perhaps you should question if your government is the one that is attacking you.

Feels like this is more of an incentive problem than a moderation problem. If engagement is the primary metric, then anything that drives strong reactions will naturally get amplified.

The feedback loop for this moral hazard is slow but implacable. You can treat the zeitgeist as a dumping ground for so long, until you get so big, that you can no longer treat it like an idealized infinite substance.

I remember The Social Dilemma’s entire premise was basically this headline minus TikTok, and that came out what? 7 or 8 years ago?

Not saying “well duh” I just think at this point I have to ask “are we going to do anything about it?”

We’ve known about the financial incentives to promote anger and outrage online for at least a decade now. So what are we going to do about it?

  • What can you do about it? That is the rub. You can't. It is no coincidence that pretty much all avenues of information consumption you face are susceptible to this issue. It is by design that these technologies are able to reach you in these ways. It is by design that propagandists have so much success. Everyone in power today is in power because of propaganda. Why would they ever let go of their reins of power? It is the sole forcing factor keeping them in power after all. They'd be no different than you and I otherwise, which scares them more than anything.

    • Legislate! We need laws! I get we aren’t used to that anymore in the US but truly “marketing” and social media in the US has become so hostile and harmful I just don’t understand how we can in good conscience not start to put heavier restrictions on them. Enough is enough. We can’t continue to sacrifice our society on the altar of the Almighty dollar.

      2 replies →

Sadly, nothing new. Why do they do it? Because they can. That regulators let companies operate this way is a massive failure.

British people complaining about free speech and trying to censor the internet. America needs to keep standing up to British censorship interests.

As someone who uses IG a lot, I have found this to be overwhelmingly true. Very often when I stumble upon a controversial video, the very top comment is a ratioed hot take on the topic, as if Meta purposely put the comment at the top to ruffle feathers. On top of that, when I find controversial topics (like the moon landing), a large majority of comments lean toward one extreme opinion, with all the differing opinions pushed to the very bottom of the comment section.

For a long time now, whistleblowers haven't been needed to say the obvious and self-evident about online media; a thoughtful user can realize it instantly. From the era of black-and-white TV programs until now, the content has had the same goal. I believe that after enough iterations of user control, the delivery will become regulated like drugs. Now it's starting for kids.

Throw away your 'smartphone' and stop using anti-social media. It is killing society, and only making the Billionaires more powerful. They are evil and will do anything to stay in power.

In my experience there’s a strong “banality of evil” that happens.

Some poor schlub ML Eng has shipped a feature that wins an A/B test. They’re pushing to get promoted. Their management wants to show they’re hitting their KPIs.

An engine of destruction filled with well meaning people just hoping to advance in their careers.

You might say, it’s ultimately the designers of the incentives that matter. Even there, the leadership will change. Inevitably the needs of the capitalist machine take over.

  • Zuckerberg himself demanded these features. Some board members even protested, yet they were ignored. Also, you can't tell me that the "engineers" at Meta don't know what they are building. Some may be in denial, blinded by their fat paycheck, but I assume most just don't care.

When I hear "Meta" and "Facebook" the top 10 things I think:

1. "Surveillance"

2. "Advertising"

3. "Scams"

4. "AI slop"

5. "Manipulated experience"

6. "Child harms"

7. Misinformation campaigns.

8. Disinformation campaigns.

9. "Doom scroll regret"

10. "Zuckavatarphilia"

But I don't claim to have the "right" opinion and am curious how other people respond to the brands. If each of you could reply and re-list those associations in the order you experience them, I will collate the results and post them everywhere I can think of. It would go a long way toward satisfying my curiosity, and the curiosity of reporters who like to repeat things they read on the internet.

If you want better content, look for Kagi's Small Web, or better yet, find a better algorithm that optimizes for your preferences rather than engagement.

I have my Instagram and X on a locked-down browser in a container with a fake profile. An LLM drives it, finds the posts from specific users, and compiles a gist of all the important things in my locality (or whatever you care about) every evening, without me ever going near that FOMO-driven dumpster fire of TikTok/Insta/X.

Best LLM ROI I've made.
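A setup like this boils down to a small pipeline: collect posts into a local store, filter to the accounts and topics you care about, and hand the batch to an LLM for a nightly digest. A minimal sketch of the filtering and prompt-assembly half, assuming posts have already been scraped into dicts; the field names are illustrative, and the final prompt would be sent to whatever LLM client you use:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical post records; in practice a headless browser or export tool
# would collect these. Field names here are assumptions, not a real API.
posts = [
    {"user": "@city_council", "time": now - timedelta(hours=2),
     "text": "Road closure on Main St this weekend"},
    {"user": "@rage_bait_acct", "time": now - timedelta(hours=1),
     "text": "You won't BELIEVE this outrage"},
    {"user": "@city_council", "time": now - timedelta(hours=48),
     "text": "Old notice about a past closure"},
]

FOLLOWED = {"@city_council", "@local_news"}
KEYWORDS = ("closure", "outage", "event")

def select_for_digest(posts, since_hours=24):
    """Keep recent posts from followed accounts matching local-interest keywords."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=since_hours)
    return [
        p for p in posts
        if p["user"] in FOLLOWED
        and p["time"] >= cutoff
        and any(k in p["text"].lower() for k in KEYWORDS)
    ]

def build_digest_prompt(selected):
    """Assemble one prompt for a nightly LLM summary instead of scrolling a feed."""
    bullets = "\n".join(f"- {p['user']}: {p['text']}" for p in selected)
    return "Summarize these local updates in three sentences:\n" + bullets
```

The point of the design is that the LLM only ever sees the pre-filtered batch, so the rage-bait account never makes it into the evening digest at all.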

I look at people who use FB, TikTok, or X the same way I look at smokers or alcoholics: with sadness and pity. The fact that we let children use this is hard to accept. The fact that fellow hackers and engineers, some of the brightest minds, have contributed to this is extremely disappointing. Shame on you.

  • the bucket of crabs truly pervades in its metaphorical accuracy. regardless of intelligence, humans are liable to drag down their fellow men. insane to consider that children are effectively drugged from infancy. for this i do not blame an uneducated society strained to its zenith; i blame the sociopathic and the craven who have enabled the proffering of digital drugs, and consequently accelerated societal addiction. the shame falls entirely on them. may reincarnation be real such that sadistic six-figure-salaried software engineers and their malicious managers are forced to reap the rewards of such "engineering".

This has been known forever… they shouldn’t exist. Anyone shocked to know Zuckerberg’s company does it? The guy started by stalking and ranking girls at college FFS.

The fact that they are still allowed to operate this way is extremely frustrating. The damage that is done by these "social media" platforms has been known for over a decade now, yet nothing ever happens... FB and TikTok should be treated as what they are: dangerous digital drugs engineered by sociopaths.

What!? I’m shocked! Shocked I say!

Is it really whistleblowing when everyone already knows it?

Why are social media platforms picked on?

Did we forget Gresham's Law applies to content and has done so since humans could communicate?

Bad or wrong ideas are the ones that get talked about. Do we discuss the 10 issues politicians get correct, or the 1 they screw up?

Platform is irrelevant here; the exact same phenomenon occurred on radio and TV decades before it did on social media platforms, and in newspapers centuries prior.

  • > since humans could communicate

    You have finally identified the problem. It all started with Homo habilis and misinformation has been rampant ever since. But even protozoan parasites mimic host proteins and block signals, so you really have to go a lot further back to deal with fake news.