Comment by ToucanLoucan
1 day ago
Step 1: remove Section 230 protection for algorithmically-elevated content.
If you're going to have attention-mining, addiction-creating software funnel people into rabbit holes, then those rabbit holes need to be verified, safe-to-consume stuff. Watching 5 hours of 5-Minute Crafts is, at worst, going to make someone spend too much money at Hobby Lobby. Certainly not good, but a workable issue. Watching 5 hours of white supremacist propaganda is how you get our current sociopolitical climate.
How is that a "step 1" when that's describing something else entirely?
How much would you pay to own an account on social media? If your answer is $0, then you're not addressing anything; you just want someone else to subsidize your entertainment, and you want to call the shots on top.
I don't work for free, and I know damn well neither do you.
> How is that a "step 1" when that's describing something else entirely?
You asked "how do we change that," and I'm assuming the "that" referred to the subject of the parent comment: "The damage of an advertising-based internet economy." That in turn exists in the context of the linked video in the OP, which lays out the consequences of machine learning being applied to building hyper-addictive and extremist social media websites, in 2017 by the way, and the speaker's broad hypothesis seems, in my eyes, broadly confirmed.
And the principal issue there is this: an algorithm that consistently directs you to more concentrated and extreme versions of whatever you're consuming (vegetarian -> vegan, for example) can be utterly benign or perhaps annoying in that context, but gets notably darker when it's moving people from Donald Trump's rallies to The Jewish Question.
I have no issue at all with the former example, I have a LOT of issues with the latter.
> How much would you pay to own an account on social media? If your answer is $0, then you're not addressing anything; you just want someone else to subsidize your entertainment, and you want to call the shots on top.
In that equation, I'm the product. I have every right to call the shots, because the social media company only makes money through my participation, which is why I left Facebook and have only atrophied, ancient presences on most websites. I'm fine being shown ads for weird tech junk I might find cool. I'm not fine having the intricacies of my personal beliefs sanded off by weirdos trying to sell white supremacy like it's Pepsi.
> You asked "how do we change that"
He did not; you're attributing another person's quote to him.
> And the principal issue there is this: an algorithm
Section 230 does not stop people from using algorithms to reinforce harmful content-consumption loops for the purpose of selling advertisements. It specifically protects them from the consequences, but repeal it and you've now created a common incentive to sell Cocomelon for American adults.
We keep Section 230 because, even when it's someone like Elon Musk at the reins, adults deserve to be treated with maturity and respect.
Would you support blocking BLM and black supremacist propaganda too? Essentially, you're just proposing traditional government censorship. The good thing about Soviet TV is that it had only wholesome content, not that Western capitalist stuff.
BLM content does not promote hate the way white nationalist content does, and I'm immediately suspicious of your motives in trying to draw that equivalence. BLM is about justice and equality under the law. White supremacy is decidedly not; like, it's in the name. That's the supremacy part.
As for black supremacist content, yeah nix that shit too. It's corrosive for the exact same reasons. Was this supposed to be a hard question?
They aren't proposing blocking content. If a business-controlled algorithm recommends something, the company should be responsible for what it pushes; the point is that amplification shouldn't get the same Section 230 protection. Hosting stays protected. Deliberate, profit-driven curation would not be.