Comment by munk-a
5 days ago
"What do we do about it?"
Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.
Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent, even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb, content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)
The problem, of course, is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And there's the minor issue that the vast majority of things people complain about are just plain First Amendment-protected speech, so the §230 protections don't actually matter, as the content isn't illegal in the first place.
I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads they promote when a newspaper can be held liable if they were to publish the same ad.
But isn't this difficult when the tech bosses are in cahoots with the country bosses? And honestly, even if the leadership changes, I have a feeling the tech companies will naturally switch boats as well - which might be a reason the opposition doesn't target them much nowadays, to make sure they switch along.
How would you square that with a site like Hacker News, which has algorithms for showing user-submitted links and user-generated comments?
Listing content alphabetically or chronologically is technically an "algorithm" too. What I'm specifically challenging here is the personalized algorithm designed to keep individual users on the platform based on a user profile shaped by countless active and passive choices the user has made over time. The type of algorithm HN uses, which serves the same content to every user based on global behavior, is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
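The distinction can be made concrete: a global, non-personalized ranking takes only aggregate community votes and story age as inputs, so every user sees the same ordering. The formula and gravity constant below are an illustrative sketch in the widely described HN style, not HN's actual code.

```python
import math

# Illustrative decay constant; higher values make stories fall off faster.
GRAVITY = 1.8

def global_score(votes: int, age_hours: float) -> float:
    """Score depends only on aggregate votes and age, not on who is looking."""
    return (votes - 1) / math.pow(age_hours + 2, GRAVITY)

def rank(stories: list[dict]) -> list[dict]:
    """Same input produces the same ordering for every user - no per-user profile."""
    return sorted(
        stories,
        key=lambda s: global_score(s["votes"], s["age_hours"]),
        reverse=True,
    )
```

Note what is absent: there is no user ID anywhere in the scoring function, which is exactly what makes this kind of ranking content-serving rather than per-user targeting.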
Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think search and other forms of selective content surfacing should never have been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. But I think failing to tackle this problem will also make the web unusable, and in a worse way.
> I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable
I can’t be the only one confused by these calls to have the government destroy things like searching the web, am I?
How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online, and now there are calls to do drastic measures like make search engines legally untenable to run in the United States?
It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do? Shrug their shoulders and give up on the internet? Or go use a search engine from another country?
What we need is quite simply a very good protocol for distributed search. It takes some storage, some bandwidth, and some CPU cycles. Have people contribute those and earn queries and indexing. Make it very good, but simple enough for a half-decent programmer to write a level-1 node that can only announce it exists. Trackers, super nodes, ban lists, ranking algos, etc. Write server code in all the languages; have phone and desktop clients. There can be subscription-based clients too, so that the CPU, storage, and bandwidth can be handled for you by a company.
This description is intentionally vague.
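Since the description is intentionally vague, here is one possible shape of the "level-1 node that can only announce it exists": a single UDP datagram fired at a tracker. Every field name, the message format, and the port number are hypothetical assumptions for illustration, not any real protocol.

```python
import json
import socket

# Hypothetical well-known port a tracker might listen on for announcements.
ANNOUNCE_PORT = 47474

def make_announce(node_id: str, capabilities: list[str]) -> bytes:
    """Build the minimal message: 'I exist, and here is what I can do.'"""
    msg = {
        "type": "announce",          # assumed message type
        "node_id": node_id,          # self-chosen identifier
        "capabilities": capabilities,  # empty list for a bare level-1 node
        "proto_version": 1,          # lets the protocol evolve later
    }
    return json.dumps(msg).encode("utf-8")

def announce(node_id: str, tracker_host: str = "127.0.0.1") -> None:
    """Send one fire-and-forget datagram to a tracker; no reply expected."""
    payload = make_announce(node_id, capabilities=[])
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (tracker_host, ANNOUNCE_PORT))
```

Higher-level nodes would presumably advertise capabilities like `"index"` or `"query"` in the same message, which is what would let trackers route work to contributors.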
This seems the same as news organisations choosing which news to report on, but driven by user behaviour rather than the org's employees themselves.
oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead. But these allegations date to when the company was fully under the control of ByteDance, and not US-regulated entities at all.
> oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead
Which should make people think twice when they call for government regulation on speech as a solution to content they don't want other people to see.
The more you give the government power to control speech, the more they'll use those laws to further their own interests.
Wouldn’t we need to shut down all the news outlets, all the Twitters, and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.
There are people in this thread directly calling for us to strip protections from search engines and force them to shut down.
I think a lot of this discussion has become detached from reality and we’re just entertaining some people’s impossible fantasies about shutting down the internet and returning to the past.
Human instinct is always to ban and fight everything as soon as any change happens in society. The same biological motivation to doomscroll fuels our instincts to panic and doompost about how society is ruined unless we do [brash action].
Then we'll just use the Chinese apps. Or do you plan on shutting down our access to Chinese apps too?
Like TikTok?
"What do we do about it"
Account --> Delete
>> Meta and TikTok have no natural right to exist if they are a net negative to society.
Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.
Regulating content that makes people enraged seems like a slippery slope toward regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence, or really obvious bullying (serious cases, like "kill yourself" said to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or future governments with this much power.
For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
At least in the US the government can't regulate speech (for the most part). But what we could do is regulate recommendation algorithms or other aspects of the overall design in a way that's generalized enough to be neutral in regards to any particular speech. And such regulations don't need to apply to any entity below some MAU or other metric.
Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.
In theory I'm OK (kinda) with regulating the "overall design" somehow, but I don't see how it's going to work. Forced interoperability is a (very?) good idea, as it's really general, but it also doesn't directly address what the article and most comments talk about - the rage bait. I just can't imagine regulations (or "laws", or whatever the correct term is) that deal specifically with the algos that push rage bait and that can't later be abused, once passed, to deal with other unpopular speech. And it seems like people want laws that directly deal with that - with the bad types of speech or the algos themselves.
To clarify, I use "rage bait" as an example phrase, but it includes algos that promote engagement at any cost and other things that aren't outright dangerous but that we think are. Not, like I said, CSAM or yelling FIRE or telling people to kill themselves.
> I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid)
I’ve reported videos that look like sexual exploitation, videos that call for violence, and videos that promote hate (xyz people are cockroaches), and all I’ve gotten back is that “it does not go against community guidelines”, with a link to block the person who created them. So any concerns of “where do we draw the line” are, in my opinion, pointless, because the bare minimum isn’t even being done.
I agree with your CSAM and explicit calls for violence examples - they probably should be regulated. But a few comments ago in another thread someone didn't like me calling people in the workplace who annoy me with their mindless chit chat "corporate drones". My post could be construed as promoting hate. Where do we draw the line from "cockroaches" to "drones"? Do I have to call a certain "protected class" drones for it to qualify as hate speech?
What if I didn't say anything bad about a race or a sex, but said:
> I have coworkers that pester me with their small talk about the weather every time I see them. I hate those fucking cockroaches.
That's in bad taste, sure, but should it be regulated? You may know I obviously don't hate-hate them (they're just annoying, but most of them are good people) or actually consider them cockroach-like in any meaningful aspect (they're obviously people, but with annoying tendencies). But would a regulator know the difference? What about a malicious regulator who gets paid by (ok, this is a silly example, but bear with me) the weather-talking coworker lobby to censor me? In many not-so-silly examples a regulator could silence anyone for anything (politics, sex, drugs, ethics), as long as it uses a bad word or says anything negative about anyone. I don't want to live in such a society. That much power would be abused sooner or later.
I'm sorry, but are you saying it's hard to figure out what to do, so let's do nothing? Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content; the slope is only slippery because the salivating mouths of these social platforms grease it.
Also, I don't think people are advocating censorship; they're advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people the equivalent of nukes for communication.
> are you saying it's hard to figure out what to do so let's do nothing?
I'm fine with doing something, but the current "something" seems slippery.
> Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
But what is "racist", exactly? See why I think it's a slippery slope and why it's ill-defined:
1. We could agree that "Let's go out and kill/enslave all the $race/$gender" is racist, but that's bad no matter which group we substitute for $race, as it's speech that incites violence.
2. What about "$race is genetically inferior in a way (less intelligent, less athletic, more prone to $bad_behavior)"? I honestly think most differences between races/ethnicities are due to environmental factors, but what if there actually are differences in intelligence or anything like that? Should we ban speech that discusses that? Black people win running races and are great at basketball. They're prone to certain diseases, as are Caucasians or Asians. So would you ban discussing that? Or would you ban blindly asserting that $race is $Y without some sort of proof?
3. What about statements like "There are way more male bus drivers because X"? Or "men are better at Y, but women are better at Z"?
What do you think the definition of racism and sexism in this context should be? I think the line is where we incite violence towards a group, but not about discussing differences that may or may not be true.
> Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
I think restricting a platform (or anyone or anything) from promoting someone IS censorship. If it's not censored, why shouldn't I be able to promote it? This honestly feels disingenuous - like "we pretend that the racist isn't censored and can have his little blog, but it's illegal to promote his little blog".
> I'm sorry but are you saying it's hard to figure out what to do so let's do nothing?
That seems more reasonable than the alternative, which is to make modifications to a complex system without being sure what the outcome will be. You're more likely to cause bigger problems.
regulation will never happen because these are instruments to control the masses
All the more reason for regulation. If people catch on to the fact that they are being manipulated and abused by the platforms to "drive engagement", they might abandon them or spend less time on them. If the government regulates these platforms so that they are safer, or at least less harmful, people will feel better about using them, giving the government a larger platform to use to control the masses.
> If people catch on to the fact that they are being manipulated and abused by the platforms
I am not trying to be funny or anything, but this sounds like "if only the fat kid realized that eating 10 apple pies before bedtime might be the reason s/he is fat." We already know what social media platforms are doing, not just to young people but to all people.
> If the government regulates these platforms
This is like saying "congressmen care about our debt, so they will vote to reduce their own salaries by 90%" - the government is not going to regulate tools it is using to control the narrative/masses, etc...