Comment by zyxzevn
16 hours ago
The problem with social media (and all media) is opinion-based censorship, which causes group-think, and the chaos of uncategorized replies.
Different opinions do matter. But due to the algorithms, the most emotional responses are promoted. There is no way to promote facts or what people think are facts.
So most discussion will be extremely emotional and not based on facts and their value. This is even true in scientific discussions.
Combined with group-think, these emotions can grow and lead to catastrophic outcomes.
> There is no way to promote facts or what people think are facts.
There is no way with existing platforms and algorithms. We need systems that actually promote the truth. Imagine if the claims (posts) you see came with a score* that correlated with whether the claim is true or false. Such a platform could help the world, assuming the scores are good.
How to calculate these scores is naturally the crux of the problem. There are infinitely many ways to do it; I call these algorithms truth heuristics. These heuristics would take various inputs, like user-created scores and credentials, to give you a better estimate of the truth than going with your gut (a toy sketch follows the scale below).
Users clearly need algorithmic selection and personalized scores. A one-size-fits-all solution sounds like a Ministry of Truth to me.
* I suggest a real number in [-1, 1]:
-1 : Certainly false
-0.5 : Probably false
0 : Uncertain
0.5 : Probably true
1 : Certainly true
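To make the "truth heuristic" idea concrete, here is a toy sketch in Python. It takes a weighted average of user-submitted scores, where each rater's weight reflects how much the viewing user trusts them, so two users can see different scores for the same claim. Everything here, from the names to the weighting scheme, is a hypothetical illustration under my own assumptions, not a proposal for the real algorithm.

```python
# Minimal sketch of one possible truth heuristic: a trust-weighted average of
# user-submitted scores, clipped to [-1, 1]. All names are made up.
from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    score: float  # in [-1, 1]: -1 certainly false ... 1 certainly true

def claim_score(ratings: list[Rating], trust: dict[str, float]) -> float:
    """Personalized estimate for one claim.

    `trust` maps rater_id -> weight chosen by (or learned for) the viewer,
    so the same ratings can yield different scores for different viewers.
    """
    weighted = [(trust.get(r.rater_id, 0.1), r.score) for r in ratings]
    total = sum(w for w, _ in weighted)
    if total == 0:
        return 0.0  # no usable ratings: report "uncertain"
    est = sum(w * s for w, s in weighted) / total
    return max(-1.0, min(1.0, est))

ratings = [Rating("epidemiologist_42", 0.9), Rating("random_troll", -1.0)]
my_trust = {"epidemiologist_42": 5.0, "random_troll": 0.2}
print(claim_score(ratings, my_trust))  # ~0.83: "probably true" for this viewer
```

A real heuristic would also fold in credentials, track records, and so on; this only shows the shape of the computation.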
> The problem with social media (and all media) is opinion-based censorship, which causes group-think, and the chaos of uncategorized replies.
All people are biased. It's also impossible to avoid bias: some bias is needed to filter the firehose of data.
What you're describing is often a form of moderation.
> Different opinions do matter. But due to the algorithms, the most emotional responses are promoted. There is no way to promote facts or what people think are facts.
This is tuneable. We have tuned the algos for engagement, and folks engage more with stuff they emotionally react to.
People could learn to be less emotionally unstable.
> So most discussion will be extremely emotional and not based on facts and their value. This is even true in scientific discussions.
I think you're overfitting. Moderation drives a lot of how folks behave in a community.
> Combined with group-think, these emotions can grow and lead to catastrophic outcomes.
Group-think is also how we determined mammals are mammals and the earth isn't the center of the universe. Sometimes a consensus is required.
I am thinking of a moderation system that focuses on categorization instead of censorship.
There will be a bias in moderation, but that will have less of an effect when there is no deletion. If possible, the user could choose their preferred style (or bias) of moderation. If you want full freedom, you can let users select "super-users" to moderate/categorize for them.
Emotional responses and troll jokes could be separate categories, as long as they do not call for violence or break other laws; a rough sketch of how this could work is below.
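As an illustration of categorization instead of deletion, here is a hedged sketch: posts are only labeled, never removed, and each reader picks which "super-user" labeler to subscribe to and which categories to collapse. All identifiers and labels are made up for this example.

```python
# Categorization-style moderation: labels never delete content; the reader
# chooses whose labels to apply and which categories to collapse from view.
posts = [
    {"id": 1, "text": "New preprint on battery chemistry"},
    {"id": 2, "text": "lol you people are all sheep"},
    {"id": 3, "text": "Here's why the study is flawed (with sources)"},
]

# Labels produced by "super-users" the reader can subscribe to.
labels = {
    "chem_mod": {2: "troll-joke", 3: "counter-argument"},
    "strict_mod": {2: "low-effort", 3: "low-effort"},
}

def render(posts, labels, subscribed_to, hidden_categories):
    """Show every post; collapse (not delete) those whose label the reader hides."""
    my_labels = labels.get(subscribed_to, {})
    for p in posts:
        tag = my_labels.get(p["id"])
        if tag in hidden_categories:
            print(f'[{p["id"]}] (collapsed: {tag})')
        else:
            suffix = f" [{tag}]" if tag else ""
            print(f'[{p["id"]}] {p["text"]}{suffix}')

# Reader A follows chem_mod, hides troll jokes, but keeps counter-arguments visible.
render(posts, labels, subscribed_to="chem_mod", hidden_categories={"troll-joke"})
```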
Consensus is still group-think. I think it is destructive without any clear view of where it stands among other options or ideas, like: "why exactly is the earth not the center?" A lot of consensus is also artificial, due to biased reporting, biased censorship, and biased sponsorship. During discussions, people within a consensus tend to use logical fallacies, like portraying the opposition as idiots or ignoring any valid points the opposition brings into the discussion.
I think that people have become less intelligent due to one-sided reporting of information. With extra information, people will become smarter and more understanding of how other (smart) people think.
> categorization
This exists on Bluesky under the name "labeling": https://news.ycombinator.com/item?id=39684027
> People could learn to be less emotionally unstable.
How does it make sense to make billions of people responsible for abating the consequences of choices made by a few social media companies?