
Comment by lobocinza

1 year ago

> Good point that the algorithm tries to compensate for the perspectives, but I’m sure it still comes down to a popularity contest.

It isn't clear whether the polarization score has only one dimension, in which case it would capture the US culture wars well but miss nuances outside of them, or whether it's more complex than that.
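
For what it's worth, the scorer described in the repo linked below models each rating with a single latent factor per rater and per note, so polarization is captured along one axis. A rough sketch of that idea (not the production code, which layers on more regularization and status logic):

```python
import numpy as np

# Sketch of the one-factor model from the Community Notes docs:
#   rating(u, n) ≈ mu + b_user[u] + b_note[n] + f_user[u] * f_note[n]
# The single factor f is the "polarization" axis; a note counts as broadly
# helpful when its intercept b_note stays high once that axis is removed.
def fit_one_factor(ratings, n_users, n_notes, epochs=200, lr=0.05, reg=0.03):
    """ratings: list of (user_id, note_id, value) tuples with value in {0, 1}."""
    rng = np.random.default_rng(0)
    mu = np.mean([v for _, _, v in ratings])
    b_user, b_note = np.zeros(n_users), np.zeros(n_notes)
    f_user = rng.normal(0, 0.1, n_users)
    f_note = rng.normal(0, 0.1, n_notes)
    for _ in range(epochs):
        for u, n, v in ratings:
            err = v - (mu + b_user[u] + b_note[n] + f_user[u] * f_note[n])
            b_user[u] += lr * (err - reg * b_user[u])
            b_note[n] += lr * (err - reg * b_note[n])
            f_user[u], f_note[n] = (
                f_user[u] + lr * (err * f_note[n] - reg * f_user[u]),
                f_note[n] + lr * (err * f_user[u] - reg * f_note[n]),
            )
    return mu, b_user, b_note, f_user, f_note  # notes are ranked by b_note
```

Whether one axis is enough outside the US context is exactly the open question.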

> Generally, the platform determines the algorithm of which note wins, so that’s centralized. The algorithm depends on what kind and how many users vote and how. Those users exist on the platform which requires registration and can deny any given user. Centralized.

Yes, not perfect but still better than the traditional media oligopoly.

> Further, no guarantee that the actual algorithm in production matches the one made public, but I guess they have no reason to lie here.

The algorithm and the data are open. It is reproducible.

https://github.com/twitter/communitynotes

https://twitter.com/i/communitynotes/download-data
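
If you want to check it yourself, something like this gets you the raw data (file names are placeholders for whatever the export is called on the day you download it; the dump is a set of TSVs covering notes, ratings, note status history and user enrollment):

```python
import pandas as pd

# Placeholder file names; the export is tab-separated.
notes = pd.read_csv("notes-00000.tsv", sep="\t")
ratings = pd.read_csv("ratings-00000.tsv", sep="\t")

print(f"{len(notes)} notes, {len(ratings)} ratings")
```

From there you can re-run the open-source scorer against the same ratings and compare its output with the statuses shown on the site.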

> It’s not just me telling, the tweet has been up so far and community note does its best to awkwardly convey the controversy that would’ve been otherwise completely lost due to the ill designed way community notes work.

I checked the tweet (https://twitter.com/MOSSADil/status/1745921315498811752) and at least now the community note shows different content:

> Original video, which provides a better look at the plate (鄂F1573警), indicates that the video was shot in Hubei, China.

>> https://x.com/ken2009_/status/1227476618014330880?s=46

Many here are biased against Elon and Twitter for political reasons so they are too quick to pass judgement.

> Original video, which provides a better look at the plate (鄂F1573警), indicates that the video was shot in Hubei, China

This means there was yet another update since I looked. The previous note at least made the user suspect that there was an attempt to point fingers at some other country. Now it merely corrects the location in China. It is good to link to a higher-quality video, but I don’t rule out that the note will drift over time to suit the agenda of the government.

> Many here are biased against Elon and Twitter for political reasons so they are too quick to pass judgement

He killed the feature that labeled accounts associated with governments though.

I think community notes are not his invention so I don’t blame him for them, but they are very poorly implemented and are strictly worse than tweets themselves.

If they applied the same algo to weighing tweets and replies, they could’ve gotten the same results but without making people trust blindly. But of course this defeats the point of paying for Elon’s blue checkmarks.

  • > This means there was yet another update since I looked. The previous note at least made the user suspect that there was an attempt to point fingers at some other country. Now it merely corrects the location in China. It is good to link to a higher-quality video, but I don’t rule out that the note will drift over time to suit the agenda of the government.

    I didn't calculate any statistics, but exploring the data I saw way more "anti-China" notes than "pro-China" notes.

    > He killed the feature that labeled accounts associated with governments though.

    Yeah, because Western government-funded media cried rivers when they were correctly labeled as such.

    > If they applied the same algo to weighing tweets and replies, they could’ve gotten the same results but without making people trust blindly. But of course this defeats the point of paying for Elon’s blue checkmarks.

    Doing so wouldn't make sense as the algorithm needs prior data from tweets to calculate ratings. What are your expectations? That Twitter hides (soft bans) tweets/accounts that an algorithm labels as misinformation because they were massively flagged? That happened before; it was easily abused; it was censorship.

    > I think community notes are not his invention so I don’t blame him for them, but they are very poorly implemented and are strictly worse than tweets themselves.

    You could give only one example where community notes were abused to spread misinformation, and with time the correct note prevailed.

    • > exploring the data I saw way more "anti-China" notes than "pro-China" notes.

      Maybe that is the problem: it is seen as pro/anti X instead of facts/lies.

      > You could give only one example where community notes were abused to spread misinformation, and

      If you ask this, you miss the point. How exactly do you expect me to tell a true note from a false one? The medium is the problem here.

      > with time the correct note prevailed.

      As long as it prevails before the heat death of the universe that’s OK, right?

      > Doing so wouldn't make sense as the algorithm needs prior data from tweets to calculate ratings.

      Twitter has prior data from tweets. I don’t get it.

      The algorithm they use to decide which community note gets shown can be used to capture feedback and sort tweets instead. Problem solved. People get better access to balanced views without being nannied by the platform or elonsplained what the truth is.
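
      A toy illustration of the difference (numbers are made up): two replies with the same like count, where one is liked across both "sides" and the other by only one side.

      ```python
      # Made-up numbers: "intercept" stands for the score left over after the
      # one-factor model removes the polarization axis, as in the notes scorer.
      replies = {
          "reply_a": {"likes": 120, "intercept": 0.42},  # liked across both sides
          "reply_b": {"likes": 120, "intercept": 0.08},  # liked by one side only
      }
      by_popularity = sorted(replies, key=lambda r: replies[r]["likes"], reverse=True)
      by_bridging = sorted(replies, key=lambda r: replies[r]["intercept"], reverse=True)
      print(by_popularity)  # equal like counts, so popularity can't separate them
      print(by_bridging)    # surfaces the reply both sides found useful
      ```

      Raw popularity can't tell them apart; the bridging score can, and nobody has to be soft-banned or fact-checked by the platform for that to work.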
