Comment by Llamamoe
1 day ago
I feel like, regardless of all else, algorithmic curation is going to be bad, especially when it's contaminated by corporate and/or political interests.
We have evolved to parse information as if its prevalence is controlled by how much people talk about it, how acceptable opinions are to voice, and how others react to them. Algorithmic social media intrinsically destroy that. They change how information spreads, but not how we parse its spread.
It's parasocial at best, and very possibly far worse at worst.
No doubt the specific algorithms used by social media companies are bad. But what is "non-algorithmic" curation?
Chronological order: promotes spam, which will come mostly from paid actors.
Manual curation by "high-quality, trusted" curators: who are they, and how will they find content?
Curation by friends and locals: this is probably an improvement over what we have now, but it's still dominated by whichever friends and locals are more outspoken and charismatic. Moreover, it's hard to maintain, because curious people will try going outside their community, especially those who are outcasts.
EDIT: Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to.
> Chronological order: promotes spam, which will be mostly paid actors.
If users choose who to follow, this is hardly a problem. Also, classical forums dealt with spam just fine.
> Also classical forums dealt with spam just fine.
Err... well, no. It was always a big problem, it still is, and it's made even more so by the technology of our day.
How will users choose who to follow? This was a real problem when I tried Mastodon/Lemmy/Bluesky: I saw lots of chronological posts, but none of them were interesting.
Unfortunately, classical forums may have dealt with spam better simply because there were fewer people online back then. Classical forums that exist today either have mitigations or are overrun with spam.
> Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to.
This is exactly why it's a problem. It doesn't even matter whether the algorithm is trained specifically on negative content. The result is the same: negative content is promoted more because it sees more engagement.
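To make that concrete, here is a minimal sketch (the posts and numbers are made up) of a ranker that scores purely by engagement; negativity is never an input, yet it wins anyway:

    # A minimal sketch (hypothetical posts and numbers): the ranker scores
    # purely by engagement and never reads sentiment at all.
    posts = [
        {"title": "Local park reopens",        "sentiment": "positive", "engagement": 120},
        {"title": "Politician caught lying",   "sentiment": "negative", "engagement": 950},
        {"title": "New library hours",         "sentiment": "neutral",  "engagement": 40},
        {"title": "Outrage over tax proposal", "sentiment": "negative", "engagement": 870},
    ]

    # Sort by engagement only -- no sentiment feature anywhere in the "model".
    feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

    for p in feed:
        print(p["engagement"], p["sentiment"], p["title"])
    # Negative posts still dominate the top of the feed, because (per the
    # studies cited above) they attract more clicks, votes, and views.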
The result is more discontent in society: people are constantly angry about something. Anger makes reasonable discussion impossible, which in turn drives polarisation and extremism in society and politics. That is what we're seeing all over the world.
And user-sourced content is a problem too, because it lets anyone run manipulation campaigns. At least with traditional media there was an editor who would make sure fact-checking was done. The social media platforms don't stand behind the content they publish.
It isn't just social media. I've been identified as a Republican and the previous owners of my house as Democrats, and since mail forwarding has expired I get their 'spam' mail too. The names are different, but otherwise the mail from each party is exactly the same: 'donate now to stop [the other party's] evil agenda.' They know outrage works and lean into it.
Fact-checking in traditional media was always pretty spotty. Even supposedly high-quality publications like the NY Times frequently reported fake news.
I've been curating my own feeds manually for decades now. I choose who to follow, and I actively seek out ways of using social media that are strictly based on my selections and show things in reverse chronological order. Even Facebook can do this with the right URL if you use it via the web[1].
You start with almost nothing on a given platform, but over time you build up a wide variety of sources that you can keep monitoring for quality and predictive power.
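The whole "algorithm" of this approach fits in a few lines. A minimal sketch, with a hypothetical post structure and field names:

    from datetime import datetime

    # Hand-picked sources: the only personalisation is my own follow list.
    follows = {"alice", "bob"}

    posts = [
        {"author": "alice",   "posted": datetime(2024, 5, 1, 9, 0),   "text": "morning link"},
        {"author": "spammer", "posted": datetime(2024, 5, 1, 9, 5),   "text": "BUY NOW"},
        {"author": "bob",     "posted": datetime(2024, 4, 30, 22, 0), "text": "long essay"},
    ]

    # Keep only followed authors, newest first -- no ranking model at all.
    feed = sorted(
        (p for p in posts if p["author"] in follows),
        key=lambda p: p["posted"],
        reverse=True,
    )
    for p in feed:
        print(p["posted"], p["author"], p["text"])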
[1] https://www.facebook.com/?sk=h_chr
> But what is "non-algorithmic" curation?
Message boards have existed for a very long time; maybe you're too young to remember, but the questions you're raising have very obvious answers.
They're not without issues, but they have a strong benefit: everyone sees the same thing.
I have wondered if it's not algorithmic curation per se that is the problem, but personalised algorithmic curation.
When each person is receiving a personalised feed, there is a significant loss of common experience. You are not seeing what others are seeing, and that creates a loss of a shared basis for communication.
I have considered the possibility that the solution might be to enable many areas of curation, but where in each domain the thing people see is the same for everyone. In essence, subreddits. The problem then becomes the nature of the curators; subreddits show that human curators are also not ideal.
Is there an opportunity for public algorithmic curation? You subscribe to the algorithm itself and see the same thing as everyone else who subscribes. The curation is neutral (but it will be subject to gaming; the fight against bad actors will be perpetual in all areas).
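To sketch the distinction (all names and fields here are hypothetical): a public algorithm is a ranking function that never takes the viewer as an input, so everyone who subscribes to it sees the identical feed.

    # Viewer-independent: same input, same feed, for every subscriber.
    def rank_public(posts):
        return sorted(posts, key=lambda p: p["votes"], reverse=True)

    # Contrast with a personalised feed, where the viewer is an input and
    # no two subscribers share a timeline.
    def rank_personalised(posts, viewer_interests):
        return sorted(posts, key=lambda p: len(viewer_interests & p["tags"]),
                      reverse=True)

    posts = [
        {"title": "budget vote",  "votes": 40, "tags": {"politics"}},
        {"title": "new compiler", "votes": 25, "tags": {"programming"}},
    ]
    print([p["title"] for p in rank_public(posts)])  # identical for everyone
    print([p["title"] for p in rank_personalised(posts, {"programming"})])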
I agree about the tendency for the prevalence of conversation to influence individuals, but I think it can be resisted. I don't think humans live their lives controlled by their base instincts; most learn to find a better way. It is part of why I do not like the idea of de-platforming.
I found it quite instructive when Jon Stewart did an in-depth piece on trans issues. It made an extremely good argument, but it infuriated me to see, a few days later, so many people talking about how great it was because Jon agreed with them and he reaches so many people. They completely missed the point. The reason it was good is that it made a good case. This cynical "it's good if it reaches the conclusion we want and reaches lots of people" attitude is what is destroying us. Once you feel it is not necessary to make your case, just to shout the loudest, you lose the ability to win over people who disagree, because they don't like you shouting and you haven't made your case.
> the solution might be to enable many areas of curation but in each domain the thing people see is the same for everyone.
Doesn't this already happen to some extent, with content being classified into advertiser-friendly bins and people's feeds being populated primarily by top content from within the bins the algorithm deems they have an interest in?
> Once you feel like it is not necessary to make your case, but just shout the loudest, you lose the ability to win over people who disagree because they don't like you shouting and you haven't made your case.
To some extent, this is how human communication has always worked. I think the biggest problem is that the digital version is sufficiently different from the natural one, and sufficiently open to influence by popular and/or powerful actors, that it enables very pathological outcomes.
Social media companies should be liable for the content their automatic curation puts forward. If a telecom company actively gave your number to scammers to call you up, it shouldn't be able to hide behind the argument that it's not the one scamming you, someone else is. Applying regular anti-fraud and defamation laws would probably put an end to algorithmic curation.