Comment by jballanc
6 days ago
The problem with this is that section 230 was specifically created to promote editorializing. Before section 230, online platforms were loath to engage in any moderation because they feared that even a hint of moderation would tip them over into the realm of "publisher", where they could be held liable for the veracity of the content they published. Given the choice between no moderation at all and full editorial responsibility, many of the early internet platforms would have chosen no moderation, since full editorial responsibility would have been cost-prohibitive.
In other words, that filter that keeps Nazis, child predators, doxing, etc. off your favorite platform only exists because of section 230.
Now, one could argue that the biggest platforms (Meta, YouTube, etc.) can, at this point, afford the cost of full editorial responsibility. But repealing section 230 under this logic only serves to put up a barrier to entry for any smaller competitor that might dislodge these platforms from their high, and lucrative, perch. I used to believe that the better fix would be to amend section 230 to shield filtering/removal but not selective promotion; TikTok, however, has shown (rather cleverly) that selective filtering/removal can be just as effective as selective promotion of content.
Moderation and recommendation are not the same thing.
When you have a feed with a million posts in it, they are. There is no practical difference between removing something and putting it on page 5000 where no one will ever see it, or from the other side, moderating away everything you wouldn't recommend.
Likewise, if you have a feed at all, it has to be in some order. Should it show everyone's posts or only people you follow? Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?
There is no intrinsic default. Everything is a choice.
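To make that point concrete, here is a minimal sketch (the post fields and policy names are all invented for illustration, not any platform's actual schema) showing that any feed, even "no algorithm", is a deliberate ranking policy:

```python
# Hypothetical post records; every field name here is an assumption for illustration.
posts = [
    {"author": "alice", "likes": 12,   "ts": 3, "followed": True},
    {"author": "brand", "likes": 9000, "ts": 2, "followed": False},
    {"author": "bob",   "likes": 1,    "ts": 4, "followed": True},
]

def feed(posts, policy):
    """There is no neutral option: each branch is a deliberate ranking choice."""
    if policy == "chronological":      # everyone's posts, newest first
        return sorted(posts, key=lambda p: p["ts"], reverse=True)
    if policy == "popularity":         # globally most-liked first
        return sorted(posts, key=lambda p: p["likes"], reverse=True)
    if policy == "following_only":     # only accounts you follow, newest first
        kept = [p for p in posts if p["followed"]]
        return sorted(kept, key=lambda p: p["ts"], reverse=True)
    raise ValueError(f"unknown policy: {policy}")
```

Even `chronological` encodes choices: whose posts are eligible, and what "order" means.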
I remember back when Google+ had just launched, and it had promoted content: content not from my 'circles' but random other content. I walked out and never looked back.
Of course, Facebook started doing the same.
The thing is, anything from people you have not explicitly subscribed to should be considered advertorial, and the platform should be responsible for all of that content.
While I agree "There is no intrinsic default. Everything is a choice." and "There is no practical difference between removing something and putting it on page 5000" and similar (see my own recent comments on censorship vs. propaganda):
> Should it show everyone's posts or only people you follow?
Only people (well, accounts) you follow, obviously.
That's what I always thought "following" was *for*, until it became clear that the people running the algorithms had different ideas: they collectively decided both that I must surely want to see content I didn't ask for, and that I must not want to see the content I did ask for.
> Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?
If they want to supply a feed of "Trending in your area", IMO that would be fine, if you ask for it. Choice (user choice) is key.
I think maybe you shouldn't have a feed with a million posts in it? Like how many friends do you have? And how often do they post?
Early days facebook was simple: 1) You saw posts from all people you were connected to on the platform. 2) In the reverse order they were posted.
I can tell you it was a real p**r when they decided to do an algorithmic recommendation engine, as the experience became way worse. Before, I could follow what my buddies were doing; as soon as they made this change, the feed became garbage.
The way modern social media platforms are designed, yes they are.
The point is that they don't have to be. You can moderate (scan for inappropriate content, copyrighted content, etc) without needing to have an algorithmic recommendation feed.
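A toy sketch of that separation (the banned-term rule and field names are illustrative stand-ins, not any platform's actual pipeline): moderation is a filter, presentation is a plain sort, and neither step requires a recommendation engine.

```python
BANNED_TERMS = {"spamlink.example"}  # stand-in moderation rule, purely illustrative

def moderate(posts):
    """Moderation: drop content that breaks the rules. No ranking involved."""
    return [p for p in posts if not any(t in p["text"] for t in BANNED_TERMS)]

def chronological(posts):
    """Presentation: plain reverse-chronological order, no recommendation engine."""
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

posts = [
    {"text": "hello world", "ts": 1},
    {"text": "buy now at spamlink.example", "ts": 2},
    {"text": "lunch photo", "ts": 3},
]
feed = chronological(moderate(posts))  # moderated, but not algorithmically recommended
```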
Platforms routinely underinvest in trust and safety.
T&S is markedly more capable in the dominant languages (English is ahead by far).
Platforms make absurd margins when compared to any other category of enterprise known to man.
They operate at scales where even a 0.001% error rate produces far more cases than humans could ever manually review.
Customer support remains a cost center.
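Rough back-of-the-envelope arithmetic for the scale claim above (the daily volume and per-review time are assumptions, not reported figures):

```python
posts_per_day = 3_000_000_000       # assumed order of magnitude for a very large platform
error_rate = 0.001 / 100            # the 0.001% figure above, as a fraction
errors_per_day = posts_per_day * error_rate          # roughly 30,000 mistakes per day
minutes_per_review = 2              # assumed time for a careful human re-check
reviewer_hours_per_day = errors_per_day * minutes_per_review / 60
print(round(errors_per_day), round(reviewer_hours_per_day))
```

Even under these charitable assumptions, a near-perfect system leaves on the order of a thousand person-hours of review work every day.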
Firms should be profitable and have a job to do.
We do not owe them that job. Firms are vehicles to find the best strategies and tactics given societal resources and goals.
If rules to address harms result in current business models becoming unviable, then this is not a defense of the current business model.
Currently we are socializing costs and privatizing profit.
Having more customer support, more transparency, and more moderation will be a cost of doing business.
Our societies have more historical experience thinking about government capture than about "flood the zone"-style private capture of speech.
America developed the FDA and every country has rules on how hygiene should be maintained in food.
People still can start small, and then create medium or large businesses. Regulation is framed for the size of the org.
Many firms fail - but failure and recreation are natural parts of the business cycle.
This is the first time I've ever heard somebody claim that section 230 exists to deter child predators.
That argument is of course nonsense. If the platform is aware of apparent violations including enticement, grooming etc. they are obligated to report this under federal statute, specifically 18 USC 2258A. Now if you think that statute doesn't go far enough then the right thing to do is amend it, or more broadly, establish stronger obligations on platforms to report evidence of criminal behavior to the authorities. Either way Section 230 is not needed for this purpose and deterring crime is not a justification for how it currently exists.
The final proof of how nonsensical this argument is: even if the intent you claim were true, it failed. Facebook and Instagram are the largest platforms for groomers online. Nazi and white-supremacy content is everywhere on these websites as well. So clearly Section 230 didn't work for this purpose. Zuck was happy to open the Nazi floodgates on his platforms the moment a conservative President got elected. That was all it took.
The actual problem is that Meta is a lawless criminal entity. The mergers which created the modern Meta should have been blocked in the first place. When they weren't, Zuck figured he could go ahead and open the floodgates and become the largest enabler of CSAM, smut and fraud on earth. He was right. The United States government has become weak. It doesn't protect its people. It allows criminal perverts like the board of Meta and the rest of the Epstein class to prey on its people.
Reporting blatant criminal violations is not the same thing as moderating otherwise-protected speech that could be construed as misleading, offensive, or objectionable in some other way.
Indeed. However, there is no universal definition for what offends people, and never will be. People are individuals who form their own opinions and those opinions are diverse.
Ergo, if you start to moderate speech that is offensive from one point of view, that same speech will inevitably be inoffensive from another, and you've now established that you're a publisher, not a platform, because you're making opinionated decisions about which content to publish and to whom. At that point the remedy lies in reclassifying said platform as a publisher, and revisiting how we regulate publishers.
They can be publishers. They can censor material they object to. That's fine. But they don't need special exemptions from the rules other publishers follow.
I think it's good to have publishers in the world who are opinionated. There are opinions I don't like and don't want to see very often. Where we get into trouble is when these publishers get classified as platforms by the law, claim to be politically neutral entities, and enjoy the various legal privileges assigned to platforms by Section 230 of the CDA. The purpose of that section was to encourage a nascent tech industry by assigning special privileges to the companies in it. That purpose is now obsolete, those companies are now behaving like publishers, and reform of our laws is necessary.
Even if they can't afford it... Too bad for them?
I am kind of rooting for the AI slop because the status quo is horrific, maybe the AI slop cancer will put social media out of its misery.
Sweet: the best back-and-forth on this topic, all sides represented. It's very complex. What rules, if any, ought we regulate with? Probably some, somehow.
Section 230 being repealed doesn't mean that any moderation will be treated as publication. The ambient assumptions have changed a lot in the past 30 years. Now nobody would think that removing spam makes you liable as a publisher.
Algorithmic feeds are, prima facie, not moderation, not user-created content and do not fall under the purview of section 230.
We all know why they're really doing it, though.