Comment by blibble

5 days ago

I think it's worse, cigarettes never threatened democracy

the solution is real easy, section 230 should not apply if there's a recommendation algorithm involved

treat the company as a traditional publisher

because they are, they're editorialising by selecting the content

vs, say, the old style facebook wall (a raw feed from user's friends), which should qualify for section 230

> cigarettes never threatened democracy

Off topic, but I bet a book on tobacco cultivation/history would be fascinating. Tobacco cultivation relied on the slave labor of millions and the global tobacco market influenced Jefferson and other American revolutionaries (who were seeing their wealth threatened). I've also read that Spain treated sharing seeds as punishable by death? The rare contrast that makes Monsanto look enlightened!

  • Mm, definitely. I think it's probably the cash crop that has historically been the most intertwined with politics, even more so than sugar.

    Central America, the Balkans, the Levant. The Iroquois and Algonquians. Cuba. The Medicis and the Stuarts. And, as you say, revolutionary Virginia and Maryland. Lots of potential there for a grand narrative covering 600 years or more!

    (And, to gp: yes, it absolutely did threaten governments, empires, and entire political systems!)

  • Something like The Prize for the tobacco industry could be very interesting!

The problem with this is that section 230 was specifically created to promote editorializing. Before section 230, online platforms were loath to engage in any moderation, because they feared that a hint of moderation would jump them over into the realm of "publisher", where they could be held liable for the veracity of the content they published. Given the choice between no moderation at all and full editorial responsibility, many of the early internet platforms would have chosen no moderation (as full editorial responsibility would have been cost prohibitive).

In other words, that filter that keeps Nazis, child predators, doxing, etc. off your favorite platform only exists because of section 230.

Now, one could argue that the biggest platforms (Meta, Youtube, etc.) can, at this point, afford the cost of full editorial responsibility, but repealing section 230 under this logic only serves to put up a barrier to entry to any smaller competitor that might dislodge these platforms from their high, and lucrative, perch. I used to believe that the better fix would be to amend section 230 to shield filtering/removal, but not selective promotion, but TikTok has shown (rather cleverly) that selective filtering/removal can be just as effective as selective promotion of content.

  • Moderation and recommendation are not the same thing.

    • When you have a feed with a million posts in it, they are. There is no practical difference between removing something and putting it on page 5000 where no one will ever see it, or from the other side, moderating away everything you wouldn't recommend.

      Likewise, if you have a feed at all, it has to be in some order. Should it show everyone's posts or only people you follow? Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?

      There is no intrinsic default. Everything is a choice.
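      The "everything is a choice" point can be made concrete in a few lines. This is a hypothetical sketch (not any platform's actual code): even a "raw" chronological feed is a sort with a particular key, and switching to a popularity feed changes only the key, not the mechanism.

      ```python
      from dataclasses import dataclass
      from datetime import datetime, timezone

      # Hypothetical minimal model of a feed: a list of posts plus a sort key.
      @dataclass
      class Post:
          author: str
          posted_at: datetime
          likes: int

      posts = [
          Post("alice", datetime(2024, 1, 2, tzinfo=timezone.utc), likes=3),
          Post("bob",   datetime(2024, 1, 1, tzinfo=timezone.utc), likes=90),
          Post("carol", datetime(2024, 1, 3, tzinfo=timezone.utc), likes=12),
      ]

      # "Chronological" is itself a ranking choice: the sort key is recency.
      chronological = sorted(posts, key=lambda p: p.posted_at, reverse=True)

      # "Popularity" is just a different key; the mechanism is identical.
      by_popularity = sorted(posts, key=lambda p: p.likes, reverse=True)

      print([p.author for p in chronological])  # ['carol', 'alice', 'bob']
      print([p.author for p in by_popularity])  # ['bob', 'carol', 'alice']
      ```

      Any line you might draw between "neutral ordering" and "recommendation" has to land somewhere inside that `key=` argument.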


  • Platforms routinely underinvest in trust and safety.

    T&S is markedly more capable in the dominant languages (English is ahead by far).

    Platforms make absurd margins when compared to any other category of enterprise known to man.

    They operate at scales where even a 0.001% error rate produces more cases than humans could ever manually review.

    Customer support remains a cost center.

    Firms should be profitable and have a job to do.

    We do not owe them that job. Firms are vehicles to find the best strategies and tactics given societal resources and goals.

    If rules to address harms result in current business models becoming unviable, then this is not a defense of the current business model.

    Currently we are socializing costs and privatizing profit.

    Having more customer support, more transparency, and more moderation will be a cost of doing business.

    Our societies have more historical experience thinking about government capture than about flood-the-zone-style private capture of speech.

    America developed the FDA and every country has rules on how hygiene should be maintained in food.

    People still can start small, and then create medium or large businesses. Regulation is framed for the size of the org.

    Many firms fail - but failure and recreation are natural parts of the business cycle.

  • This is the first time I've ever heard somebody claim that section 230 exists to deter child predators.

    That argument is of course nonsense. If the platform is aware of apparent violations including enticement, grooming etc. they are obligated to report this under federal statute, specifically 18 USC 2258A. Now if you think that statute doesn't go far enough then the right thing to do is amend it, or more broadly, establish stronger obligations on platforms to report evidence of criminal behavior to the authorities. Either way Section 230 is not needed for this purpose and deterring crime is not a justification for how it currently exists.

    The final proof of how nonsensical this argument is, is that even if the intent you claim was true, it failed. Facebook and Instagram are the largest platforms for groomers online. Nazi and white supremacy content are everywhere on these websites as well. So clearly Section 230 didn't work for this purpose. Zuck was happy to open the Nazi floodgates on his platforms the moment a conservative President got elected. That was all it took.

    The actual problem is that Meta is a lawless criminal entity. The mergers which created the modern Meta should have been blocked in the first place. When they weren't, Zuck figured he could go ahead and open the floodgates and become the largest enabler of CSAM, smut and fraud on earth. He was right. The United States government has become weak. It doesn't protect its people. It allows criminal perverts like the board of Meta and the rest of the Epstein class to prey on its people.

    • Reporting blatant criminal violations is not the same thing as moderating otherwise-protected speech that could be construed as misleading, offensive, or objectionable in some other way.


  • Even if they can't afford it... Too bad for them?

    I am kind of rooting for the AI slop because the status quo is horrific, maybe the AI slop cancer will put social media out of its misery.

    • Sweet, best back-and-forth I've seen on this topic, with all sides represented. It’s very complex. What rules ought we regulate with, if any? Probably some, somehow.

  • Section 230 being repealed doesn't mean that any moderation will be treated as publication. The ambient assumptions have changed a lot in the past 30 years. Now nobody would think that removing spam makes you liable as a publisher.

    Algorithmic feeds are, prima facie, not moderation, not user-created content and do not fall under the purview of section 230.

    We all know why they're really doing it, though.

> As interpreted by some courts, this language preserves immunity for some editorial changes to third-party content but does not allow a service provider to "materially contribute" to the unlawful information underlying a legal claim. Under the material contribution test, a provider loses immunity if it is responsible for what makes the displayed content illegal.[1]

I'm not a lawyer, but idk that seems pretty clear cut. If you, the provider, run some program which does illegal shit then 230 don't cover your ass.

[1] https://www.congress.gov/crs-product/IF12584

You can draw a fairly clear line from the corporate response to cigarettes being regulated through to the strategy for climate change and social media/crypto etc.

The Republicans are basically a coalition of corporate interests that want to get you addicted to stuff that will make you poor and unhealthy, and to undermine any collective attempt to help.

The previous vice-president claimed cigarettes don't give you cancer, and the current president thinks the health problems caused by wind turbines and by asbestos are both hoaxes. This is not a coincidence.

The two big times the Supreme Court flexed their powers were to shut down cigarette regulation by the FDA and Obama's Clean Power plan. Again, not a coincidence.

  • That's because we / our (USA) country is owned. As Carlin said, "It's a big club. And you ain't in it."[0]

    But what isn't properly addressed when people link to this is that the real issue he's discussing is our failing educational system. It's not a coincidence that the Right attacks public schools and the orange man appointed a wrestling lady to dismantle the dept of education.[1]

    0. https://www.youtube.com/watch?v=sNXHSMmaq_s

    1. The Trump Administration Plot to Destroy Public Education - https://prospect.org/2026/01/13/trump-mcmahon-department-edu...

    Aside: I was in the audience for this show (his last TV special). Didn't know it'd be shot for TV. Kind of sucked, actually, cause they had lights on the audience for the cameras and one was right in my eyes. Anyway, a toast to George Carlin who was ahead of his time and would hate how right he's been.

THIS, EXACTLY!

If there is an algorithm, the social media platform is exactly as responsible for the content as any publisher.

If it is only a straight chronological feed of posts by actually followed accts, the social media platform gets Section 230 protections.

The social media platforms have gamed the law, gotten legitimate protections for/from what their users post, but then they manipulate it to their advantage more than any publisher.

>>the solution is real easy, section 230 should not apply if there's a recommendation algorithm involved

>>treat the company as a traditional publisher

>>because they are, they're editorialising by selecting the content

>>vs, say, the old style facebook wall (a raw feed from user's friends), which should qualify for section 230

They fought a civil war over the labor required to produce tobacco.

  • > cigarettes never threatened democracy

    "Democracy" itself was not at stake in the American Civil War because both sides practiced it. The Confederacy was/would have been a democracy analogous to ancient Athens--one where slaves (and women) were excluded from political participation. The vast majority of Confederate politicians, including Jefferson Davis, came from the "Democratic Party"--which, true to its name, championed enfranchisement for the "common (white) man" as opposed to control by elites.

    Perhaps a better example is the "Tobacco War" of 1780 in the American Revolution, where Cornwallis and Benedict Arnold destroyed massive quantities of cured tobacco to try to cripple the war financing of the colonies.

    Control of tobacco in Latin/South America since the 1700s (Spain's second-largest source of imperial revenue after precious metals) also had a directly stifling effect on democratic self-governance.

    • I think the point is a significant number of human beings were not participating in democracy at the time because their forced labor was critical to propping up the tobacco (and other) industries.

      It’s hard to claim it’s actually democracy when it only exists after stripping the rights from a large section of people who would disagree with you, if they had the power to do so.

Social media cannot "threaten democracy". Democracy means that we transfer power to those who get the most votes.

There's nothing more anti-democratic than deciding that some votes don't count because the people casting them heard words you didn't like.

The kind of person to whom the concept of feed ranking threatening democracy is even a logical thought believes the role of the public is to rubber stamp policies a small group decides are best. If the public hears unapproved words, it might have unapproved thoughts, vote for unapproved parties, and set unapproved policy. Can't have that.

  • That trivial definition sees limited use in the real world. Few countries that are popularly considered democratic have direct democracy. Most weigh votes geographically or use some sort of representative model.

    Most established definitions of democracy go something like this, heavily simplified:

    1. Free media

    2. Independent judicial system

    3. Peaceful system for the transfer of power

    The most popular model for implementing (3) is free and open elections, which has yielded pretty good results in the past century where it has been practiced.

    Considering that social media pretty much is the media for most people, it is a heavily concentrated power; and if there can be any suspicion of it being in cahoots with established political power, and thus non-free, surely that is a threat to democracy almost by definition.

    Let's be real here: It has been conclusively shown again and again that social media does influence elections. That much should be obvious without too much in the way of academic rigor.

    • Of course social media influences elections. Direct or indirect, the principle of democracy is the same: the electorate hears a diversity of perspectives and votes according to the ones found most convincing.

      How can you say you believe in democracy when you want to control what people hear so they don't vote the wrong way? In a democracy there is no such thing as voting the wrong way.

      Who are you to decide which perspectives get heard? You can object to algorithmic feed ranking only because it might make people vote wrong --- but as we established, the concept of "voting wrong" in a legitimate democracy doesn't even type check. In a legitimate democracy, it's the voting that decides what's right and wrong!


> never threatened democracy

The beautiful part is how non-partisan this is. It cooks all minds regardless of tribe.

Why change section 230? You could just make personalized algorithmic feeds optimized for engagement illegal instead, couldn't you? What advantage is there to messing with 230? Wouldn't the result be the same in practice?

  • 230 is an obvious place to say “if you decide something is relevant to the user (based on criteria they have not explicitly expressed to you), then you are a publisher of that material and are therefore not a protected carriage service.”

  • The solution must be a social one: we must culturally shun algorithmic social media, scold its proponents, and help the addicted.

    We aren't going to be able to turn off the AI content spigot or write laws that control media format and content and withstand (in the US) 1st amendment review. But we can change the cultural perception.

    • We aren't going to stop algorithmic social media through sheer force of public will without government involvement.

      Social communities aren't nimble. There's a ton of inertia in a social media platform. People have their whole network, all their friends, on the platform; and all their friends have their friends on the platform; and so on. So in order to switch from one platform to another, you need everyone to switch at the same time, which is extremely hard.

      Facebook started out pretty nice. You saw what your friends posted and what pages you follow posted, in chronological order. It had privacy issues, but it worked more or less how we'd want it to, with no algorithmic timeline. But they moved towards being more and more algorithmic over time. Luckily, Facebook was bad enough that it has gotten way less popular, but that has taken a long time.

      Twitter is the same. It started out being the social media platform we want: you saw what the people you followed posted or boosted, chronologically. No algorithmic feed. But look where it is now. Thankfully, Musk's involvement has made plenty of people leave, but there were a lot of years where everyone, regardless of political leaning, was on Twitter with an algorithmic timeline. Even though a lot of people complained about the algorithmic timeline when it was introduced, they stayed on Twitter because that's where everyone they knew was.

      YouTube too. For a long time, the only thing you saw on YouTube was what people you've subscribed to posted. It built up a huge community and became the de facto video sharing platform as a nice non-algorithmic site, and then they turned the key and went all in on replacing the subscription feed with the algorithmic feed. Now they've even adopted short-form video where you aren't even supposed to pick which video you wanna watch, you're just supposed to scroll. And replacing YouTube is hard due to its momentum.

      So even if everyone agrees that algorithmic feeds are terrible and move to a non-algorithmic platform over the next few decades, what do you propose we do when that new platform inevitably shifts towards being an algorithmic platform? Do we start a new multi-decade long transition to yet another platform?

    • It's really simple in the US: stop granting exemptions for the harm the content causes. Social media _is_ publishing. Expecting people to 'eat their vegetables' when only fast food is on offer isn't realistic, and flies in the face of all we know about the environmental drivers of public health.


If your tree is so weak that a single breeze can knock it down, why blame the wind? Disclaimer: I hate social media of all kinds, it's just that you're missing the forest.

  • The force of social media these past 20 years has been massive. We're talking radical change to the structure of information flow in society. That's not just a small breeze.

  • The breeze is more like a 2 ton harvester expertly engineered to knock your tree down.