Bluesky's stackable approach to moderation

2 years ago (bsky.social)

I'm on the team that implemented this, so I'm happy to answer questions. I'll give a brief technical overview.

It's broadly a system for publishing metadata on posts called "Labels". Application clients specify which labeling services they want to use in request headers. Those labels get attached to the responses, where they can then be interpreted by the client.

This is an open system. Clients can choose which labelers they use, and while the Bluesky client hardcodes the Bluesky moderation labeler, another client can choose a different primary Labeler. Users can then add their own community labelers, which I describe below. We aim to do the majority of our moderation at that layer. There are also "infrastructure takedowns" for illegal content and network abuse, which we execute at the services layer (i.e. the relay).

Within the app this looks like special accounts you can subscribe to in order to get additional filters. The labels can be neutral or negative, which means they can also essentially function as user badges. Over time we'll continue to extend the system to support richer metadata and more behaviors, which can be used to do things like community notes or label-driven reply gates.
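
As a rough illustration of the client side of that flow (the header and field names here are illustrative, not necessarily our exact wire format), a request might look something like this:

    // Sketch: the client lists the labeler DIDs it wants applied; the
    // app view then attaches matching labels to the returned posts.
    // The DIDs and endpoint host below are made up for the example.
    const labelers = [
      "did:plc:examplemoderation",   // primary moderation labeler
      "did:plc:examplecommunity",    // a community labeler the user added
    ];

    const res = await fetch(
      "https://appview.example/xrpc/app.bsky.feed.getTimeline",
      { headers: { "atproto-accept-labelers": labelers.join(",") } },
    );
    const { feed } = await res.json();
    // Each returned post may now carry labels shaped roughly like:
    // { src: "did:plc:examplemoderation", uri: "at://...", val: "spam" }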

  • This sounds like a good approach. Pretty much exactly this "opt-in/out, pluggable trust moderation" is something I'd thought about a number of times over the years, yet I'd never come across the relatively simple idea implemented in the real world until now

    Do you/anyone reading know of any prior work? The closest I know of is this site, in fact, which is opt-out but not pluggable. Or maybe email spam filters, from the POV of the server admin at least

    • There aren’t a lot of exact matches that I’m aware of. Spam filters, ad blockers, Reddit, mastodon, and block party all came up during the discussions.

      2 replies →

  • I have been loosely following Bluesky for a while and have read some blog posts, but haven't delved super deep. Can you expand on the "infrastructure takedowns"? Does this still affect third-party clients? I am trying to understand to what degree this is a point of centralization open to moderation abuse, versus Bluesky acting as a protocol where, even if we really want to, we can't take something down other than off our own client.

    • The network can be reduced to three primary roles: data servers, the aggregation infrastructure, and the application clients. Anybody can operate any of these, but generally the aggregation infra is high scale (and therefore expensive).

      So you can have anyone fulfilling these roles. At present there are somewhere around 60 data servers with one large one we run; one aggregator infra; and probably around 10 actively developed clients. We hope to see all of these roles expand over time, but a likely stable future will see about as many aggregation infrastructures as the Web has search engines.

      When we say an infrastructure takedown, we mean off the aggregator and the data server we run. This is high impact but not total. The user could migrate to another data server and then use another infra to persist. If we ever fail (on policy, as a business, etc) there is essentially a pathway for people to displace us.

      8 replies →

  • Obviously this is a highly moderation-averse crowd so I figured I’d add one small voice of support: I was very impressed by this post and your comment, and think this is a huge jump ahead of Reddit’s mediocre system, or god forbid whatever’s going on at Twitter rn. This made me much more interested in Bluesky, and I might even click around your careers page.

    In particular, applying Steam Curator functionality to content moderation is just perfect and I can’t believe I didn’t think of it before.

  • How will content that is illegal in some jurisdictions and legal in others be handled? Is there a presumed default jurisdiction, like California, or something?

    • Their stackable moderation system might actually allow one to implement this relatively easily.

      Add a moderation channel per country and let clients apply them depending on location/settings. It's naturally not perfect, but since one can just travel to another country to get its (potentially less restricted) view, or more simply use a VPN, it's about as good as any other such censorship measure.
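
      A sketch of how a client could wire that up (the map and names are hypothetical): keep a table from country code to labeler DIDs and merge the matching entry into whatever the user subscribed to manually.

        // Hypothetical per-jurisdiction labeler map; the client merges the
        // entry for the user's locale with their own subscriptions.
        const jurisdictionLabelers: Record<string, string[]> = {
          DE: ["did:plc:examplelabelerde"],
          JP: ["did:plc:examplelabelerjp"],
        };

        function activeLabelers(countryCode: string, userSubscribed: string[]): string[] {
          const regional = jurisdictionLabelers[countryCode] ?? [];
          return [...new Set([...userSubscribed, ...regional])];
        }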

      7 replies →

    • I’m unsure how it will play out in practice. I think it’s possible that different infra could wind up being deployed in jurisdictions that differ too significantly. Certainly that could happen outside of Bluesky.

      Bluesky itself is US-based.

      5 replies →

  • Seems like a really, really good way to create a really, really boring website.

    ETA: Rereading this, that is probably not a very helpful HNy comment, so let me elaborate.

    Maybe I am old-fashioned, but one of the things that the internet is most useful for is exploring places and ideas you would otherwise never encounter or consider. And just like taking a wooden ship to reach the North Pole, browsing around the internet comes with significant risk. But given the opportunity for personal growth and development, for change, and so on, those risks might well be worth it.

    That model of the internet, as I said, is somewhat old-fashioned. Now, the internet is mostly about entertainment. Bluesky exists to keep eyeballs on phones, just like TikTok or Instagram or whatever. Sure, Bluesky is slightly more cerebral -- but only slightly.

    People are generally not entertained by things that frustrate them (generally -- notable exceptions exist), so I can understand an entertainment company like Bluesky focusing on eliminating frustrations via obsessive focus on content moderation to ensure only entertaining content reaches the user. In that sense, this labeling thing seems really useful, just like movie ratings give consumers a general idea of whether the movie is something appropriate for them.

    So in that sense, wonderful for Bluesky! But I think I'll politely decline joining and stick with other platforms with different aims.

    • What I want is a filter for angry posts. Social media exposes me to a wider cross section than I get in person and there is really a limit to the amount of distress I can absorb.

      2 replies →

    • The internet isn't one size fits all, all the time. Most people don't want to be challenged all the time and everywhere. Sometimes you want to watch a challenging documentary about socioeconomics in 17th century Poland and other times you want to watch Friends. I see a good use case here for BlueSky allowing users to vary moderation & use curated lists to separate interests & moods.

      1 reply →

    • I think I can have lively, intellectually stimulating exposure without, say, someone advocating for the mass killing of gay people. Or engage in an interesting political discussion without bad-faith conspiracy theorists shitting up the place. For example, the “chiller”, which as far as I know is just designed to cool down a hot-button discussion, actually sounds super amazing for this purpose.

      One of the things that frustrates me about browsing Twitter now is the constant bad-faith discussions about everything, the one-off potshots that waste pixels and lead nowhere. A moderation tool that sifts through that and just gets me to the people who actually know wtf they’re talking about and are engaging honestly would benefit me greatly!

      8 replies →

  • I need to moderate the moderators.

    Not in an 'I can ban these moderators from moderating my instance' way. I need a metamoderation mechanism. I need to see how good moderators are to establish trust, and when a moderator is taken over by a hostile actor I need to see its score tank.

    Do you have something like this on the roadmap?

    • It sounds like, perhaps, the moderators only label the content. Then it’s up to your own client (and how you configure it) to filter the content, based on those labels.

      If I’ve got that right, then a client could be created that, e.g., displays labels from different moderators rather than filter the content. In fact, I’d guess most clients will have that mode.
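
      A rough sketch of that "display, don't filter" mode (the types and names here are mine, not Bluesky's):

        type Label = { src: string; val: string };       // labeler DID + label value
        type Post  = { text: string; labels?: Label[] };

        // "Transparent" mode: render every label as a visible badge
        // instead of hiding or blurring the post.
        function renderWithBadges(post: Post): string {
          const badges = (post.labels ?? [])
            .map(l => `[${l.val} via ${l.src}]`)
            .join(" ");
          return badges ? `${badges}\n${post.text}` : post.text;
        }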

      7 replies →

  • It seems to me that the relay is still a single point of failure here for moderation. What happens if my PDS gets blocked by the relay, for reasons that I disagree with? (Let's assume the content I post is legal within my jurisdiction). Are there any separate relays that I can use?

    I think what might be needed here is that anyone with enough resources can run their own relay, and PDSes can subscribe to multiple relays and deduplicate certain things.

    • > I think what might be needed here is that anyone with enough resources can run their own relay, and PDSes can subscribe to multiple relays and deduplicate certain things.

      That is how it's designed, yes!

  • Just learned about Bluesky’s labelling approach. The first thing that comes to mind is who is responsible for the content on the platform - Bluesky? Labellers?

    For example, some rogue user starts posting offensive content about other users, on the brink of breaking the law. Let’s say these other users will mention it to labellers, who this time will refuse to take this content down.

    Can you tell me what will happen in such a scenario?

    • > For example, some rogue user starts posting offensive content about other users, on the brink of breaking the law. Let’s say these other users will mention it to labellers, who this time will refuse to take this content down.

      Under US law, the user posting the content is the only one legally responsible for it. Someone hosting the content could be required to take it down by court order or other legal process (like under the DMCA safe harbor provisions) if subject to US jurisdiction. Bluesky is, so they'd have such a process the same as anyone else in the US, and of course could make their own moderation decisions on top regardless. But the protocol technically allows 3rd parties to take on any role in the system (though certain infra roles sound like they'd be quite expensive to run as a practical matter), so they could be subject to different law. Foreign judgments are not enforceable in the US if they don't meet the same First Amendment bar a domestic one would have to.

      Labellers from the description would never have any legal responsibility in the US, and they do not "take content down", they're only adding speech (meta information, their opinion on what applies to a given post) on top, best-effort. Clients and servers then can use the labels to decide what to show, or not.

      At any rate "on the brink of breaking the law" would mean nothing, legally. And "offensive" is not a legal category either. Bluesky or anyone else would be free to take it down anyway, there is zero restriction on them doing whatever they want and on the contrary that itself is protected speech. But they would be equally free to not do so, and if someone believed it actually broke one of the very limited categories of restrictions on free speech and was worth the trouble they'd have to go to court over it.

      1 reply →

  • Honestly, something here doesn't quite sit right with me.

    From the article:

    > No single company can get online safety right for every country, culture, and community in the world.

    From this post:

    > There are also "infrastructure takedowns" for illegal content and network abuse, which we execute at the services layer (ie the relay).

    If there's really no point in running relays other than to keep the network online, and running relays is expensive and hard work that can't really be funded by individuals, then it seems like most likely there will be one relay forever. If that turns out to be true, then it seems like we really are stuck with one set of views on morality and legality. This is hardly a theoretical concern when it comes to the Japanese users flooding Bluesky largely out of dissatisfaction with Twitter's moderation of 'obscene' artworks.

    • Before the Elon event (and maybe again now), Pawoo was by far the most active Mastodon instance, and there's an almost complete partition between ‘Western’ and ‘Eastern’ Mastodon networks.

      5 replies →

  • Reddit's subreddit structure and the underlying moderation system is quite scalable: site admins only deal with the things that subreddit moderators have failed to. And, in case they keep failing, admins can shut down the subreddit or demote moderators responsible for it. The work is clearly split between admins and mods, and mods only work on the content they're interested in.

    Now, with this model, I don't see such a scalable structure. You're not really offloading any work to moderation, and also, all mods will be working on all of the content. No subreddit-like boundaries to reduce the overlaps. I know, mods can only work on certain feeds, but, feeds overlap too.

    It's also impossible to scale up mod power with this model when it's needed: for example, Reddit mods can temporarily lock posts for comments, lock a subreddit, or quarantine a subreddit to deal with varying degrees of moderation demand. It's impossible to have that here because there can't be a single authority controlling the content flow.

    How do you plan to address these scalability and efficiency issues?

    • >Reddit's subreddit structure and the underlying moderation system is quite scalable: site admins only deal with the things that subreddit moderators have failed to.

      All that happens is mods just lock any post with any hint of a problem. It's become, or rather started out as, ridiculous. They just lock instead of moderate.

      3 replies →

    • > all mods will be working on all of the content. No subreddit-like boundaries to reduce the overlaps

      Not necessarily, that's up to the moderator.

      Today, I subscribe to the #LawSky and AppellateSky feeds because I am interested in legal issues. Sometimes these feeds have irrelevant material: either posts that happened to use the "" emoji for some non-legal reason or just people chatting about their political opinions on some legal case.

      Someone could offer to label JUST the posts in these feeds with a "NotLegalTopic" tag and I would find that filter quite useful.

    • > You're not really offloading any work to moderation

      I think everyone at some stage has been burnt by top-down moderation (e.g., overzealous mods, brigading, account suspensions, subreddit shutdowns, etc.) and generally everyone finds it lacking, because what's sensitive to one person might be interesting to another. Community-driven moderation liberalizes this model and allows people to live in whatever bubble they want to (or none at all). This kind of nit-picky moderation can be offloaded in this way, but it doesn't obviate top-down moderation completely (e.g., illegal content, incitement to violence, disinformation, etc.). Though a scoring system could be used for useful labellers, and global labels could be automated according to consensus (e.g., many high-rated labellers signalling disinformation on particular posts).

  • Moderation does not sound like an additive function, i.e. multiple moderation *filters* that add up to the final experience. That seems an almost Usenet-like interaction, where each user has their own schizoid killfile and the default experience is bad.

    Rather, moderation is a cohesive whole that defines the direction of the community; the same rules and actions apply to everybody.

    • This was a very active topic of debate within the team. We ended up establishing the idea of "jurisdictions" to talk about it. If a moderation decision is universal to all viewers, we'd say that it's under a specific jurisdiction of a moderator. This is how a subreddit functions, with Reddit being the toplevel jurisdiction and the subreddits acting as child jurisdictions.

      The model of labelers as we're releasing in this first iteration is, as you say, an additive filtration system. They are "jurisdictionless." We chose this model because Bluesky isn't (presently) segmented into communities like Reddit is, and so we felt this was the right way to introduce things.

      That said, along the way we settled on a notion of the "user's personal jurisdiction," meaning essentially that you have certain rights to universally control your own interactions. Blocking is essentially under this umbrella, as are thread gates (who can reply). What's then interesting is that you can enlist others to help run your personal jurisdiction. Blocklists are an example of that which we have now: you can subscribe to blocks created by other people.

      This is why I'm interested in integrating labels into threadgates, and also exploring account-wide gates that can get driven by labels. Because then it does enable these labelers to apply uniform rules and actions to those who request it. In a way, it's a kind of dynamic subreddit that fits the social model.

      3 replies →

    • Once you support delegating your killfile to other people it no longer functions the same as each user having their own. And FWIW, as an example, here on Hacker News, many of us have showdead turned on all the time and so while I am aware of the moderation put in place by the site, I actually see everything.

      Also: frankly, if there were someone willing to put a lot of effort into moderating stuff Hacker News doesn't--stuff like people asking questions you can answer in the article or via Google--I would opt into that as I find it wastes my time to see that stuff.

      And with delegation of moderation, I think it will start to feel like people voting for rulesets more than a bunch of eclectic chaos; if a lot of people agree about some rule that should exist, you will have to decide when you make a post how many people you are willing to lose in your potential audience.

  • How are labels managed? Assume I'm a labeller labeling certain posts as "rude". Will my "rude" label be identified as the same label as other labellers who also label posts as "rude", or will they be tied to my labeller identity (actually being e.g. "xyz123-rude" under the hood)?
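
    For illustration, I'd expect something like two distinct records under the hood, since each label presumably carries the emitting labeler's identity (the field names below are my guess, not the confirmed schema):

      // My guess: two "rude" labels on the same post from different
      // labelers share the `val` string, but `src` (the labeler's DID)
      // keeps them distinct. The DIDs and URI are made up.
      const labelA = { src: "did:plc:labelerxyz123", uri: "at://did:plc:someuser/app.bsky.feed.post/3k", val: "rude" };
      const labelB = { src: "did:plc:otherlabeler",  uri: "at://did:plc:someuser/app.bsky.feed.post/3k", val: "rude" };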

  • > and while the Bluesky client hardcodes the Bluesky moderation

    And what if I don't like your moderation? Can it be overruled, or is this just a system for people who want stricter moderation, not lighter?

    • Sounds like the answer is in the next part of that sentence:

      > another client can choose a different primary Labeler

      So you can overrule the moderation, but not if you use the official client.

    • > First, we've built our own moderation team dedicated to providing around-the-clock coverage to uphold our community guidelines.

      A partial answer* is that the Bluesky moderation enforces their community standards, so if you don't like that, then the platform may not be for you.

      * - because, yes, this does still have a single entity in fundamental control. But I presume their focus is on the basic threshold (i.e. legality under US law) of content.

      3 replies →

  • On a side note, I really like the freedom-to-speak and freedom-to-ignore approach; it's the thing I cherish the most about internet communication: the ability to be one's free and uninhibited self.

  • Do you plan to add custom labels?

    Let’s say we ask a question to a politician and they ignore it.

    Can we label the question as unanswered, so clients will remind users the question is unanswered?

  • Maybe add some screenshots of actual moderation on the landing page? It looks like it just shows profile settings?

I can see many benefits to this approach, but there is one area where I'm sceptical:

The article mentions moderation in different cultures several times, and this made me curious – what would be the "default" culture assumed by the site-wide moderation? (United States?)

Will the moderation team paid for by Bluesky be responsible for all languages or only English posts, and what would it mean to be from a culture without "official support"?

This seems very similar to the idea of Mastodon's server system, where you join the server whose policies match yours, except it's much easier to switch "servers". Which is a really good idea.

  • It's actually pretty substantially different, in that Mastodon's instances/servers are everything, whereas here each part is separate and you can generally use multiple of each.

    Bluesky has:

    Identity via DIDs. With web DIDs this identity is tied to your domain name and there is no Bluesky infrastructure associated with it. It's 100% in your hands (but you can't change names or domains easily). But alternatively you can use a plc DID, which does use their centralised infrastructure (currently) while allowing you to easily change your name or domain. With these you get one DID per account.

    PDS (personal data servers) or "data repositories" that hold your post content. You can self host this or use Bsky's central PDS. You can only use one of these per account.

    Relays that aggregate posts from all the PDSes/data repos. These create what Bluesky calls "the firehose".

    Indexers/app views that consume the firehose and give you your actual application view, like bsky.app.

    Feed services that give you your "algorithm" for what your page looks like. You can subscribe to as many of these as you like or host your own.

    Then you finally get labellers like what is discussed here. Compared with Mastodon, you can follow multiple labellers at the same time. Those labellers can provide automated content warnings, etc., or manual moderation. But importantly, at the end of the day, no matter what the labellers do, you control how they act and whether they are just a warning/blur or whether they actually hide the content.
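
    As an illustration of that last point, the client-side setting might boil down to something like this (a sketch with made-up names, not Bluesky's actual config format):

      // Sketch: the user decides, per labeler and per label value, whether
      // labeled content is shown, blurred behind a warning, or hidden.
      type Visibility = "show" | "warn" | "hide";

      const myLabelerPrefs: Record<string, Record<string, Visibility>> = {
        "did:plc:examplemodservice": { "graphic-media": "warn", "spam": "hide" },
        "did:plc:examplecommunity":  { "spoiler": "warn" },
      };

      function visibilityFor(labels: { src: string; val: string }[]): Visibility {
        // The most restrictive action across all applied labels wins.
        const rank = { show: 0, warn: 1, hide: 2 } as const;
        return labels.reduce<Visibility>((worst, l) => {
          const v = myLabelerPrefs[l.src]?.[l.val] ?? "show";
          return rank[v] > rank[worst] ? v : worst;
        }, "show");
      }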

    ----

    That all compares to Mastodon, where everything is tied up under one server/instance and your control over moderation boils down to "run your own instance and do all the mod work yourself" or "rely on some other instance with no oversight whatsoever".

    This isn't to say that Mastodon's approach didn't make sense at the time, but like you said, the Bluesky approach makes quite a bit more sense and makes it way easier for the user to move around between their options.

    • The advantage of Mastodon is that you can actually run the whole thing on your own so as not to be forced under anyone's moderation, whereas with Bluesky there are still central parts run by a US corporation that will censor you.

      Not saying that the fediverse is a great design - tying identities to instances is inexcusable. But Bluesky's central corporate backing makes it a nonstarter.

      2 replies →

  • In theory, but from what I understand Mastodon is rife with inter-server blocking, so your admin might just decide you're not allowed to even read posts from a condemned server (because aggregation is done server-side, unlike say RSS). And simply not blocking certain servers is enough to get your own blacklisted, making it less of a network and more of a graph of several isolated networks that you have to exclusively choose between.

    • The UX if you're on the side of a block is really bad on Mastodon too. You can be following a bunch of people, and they're following you, and suddenly they can't see your posts because you're on the wrong side of a one-way server block.

This is the way it should be. People that post things I don’t want to see can be blocked. Groups of people that post things I don’t want to see can be blocked. Conversely, people and groups I do want to see posts can be identified and cleared for my timelines.

  • Worth noting this is not truly "choose your own" moderation. People who do things like advocate for a lower age of consent will still be banned: https://bsky.social/about/support/community-guidelines

    EDIT: though it seems like you could in theory make your own client... I'm unclear how that would work in practice. Presumably content they dislike would not be stored on BlueSky servers?

  • So here’s the fun part of content moderation - you have to let communities talk about bad things as well.

    You can't have your own community banned for talking about something bad that happened to them.

    I would be curious how that scenario plays out.

    Perhaps the content is only greyed out. What do you do about users though? Is their content greyed out?

    • Nothing is stopping the community from talking about bad things. If you want to opt out of certain topics, that doesn't stop anyone from doing their own thing, it just stops you from seeing them. If you choose to allowlist your friends, you'll see whatever they post. If you blanket hide any posts that mention something, that's entirely your prerogative.

This is a really cool idea. I love the "Spider Shield" example they came up with. I look forward to this being used for all kinds of moderation-adjacent things like spoilers, and topics you don't care to read about for the time being.

This is honestly pretty great. It's the official scalable version of those browser add-ons that let you recursively block people, block Twitter Blue users, block anyone who's posted in x subreddit, Masstagger. Big fan of making programmable blocklists a first-class feature.

I don't think it's going to save them from https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you... this kind of drama but it's nonetheless welcome. It will be interesting to see how people react to ending up on a popular 3rd party blocklist.

  • > block anyone who's posted in x subreddit

    Careful. Just because someone posts in /r/conservative or /r/liberal does not necessarily make them either.

  • What's recursive blocking?

    • Yep, I stopped using Twitter so I don't need them anymore, but they were a godsend at avoiding internet drama and discourse I had no desire to engage in. You pick someone famous at the center of the drama and let the algo do its thing blocking them, all their followers, all their followers' followers and so on. It's a coarse heuristic to be sure, but operating on the principle that I can live without any given person's tweets, it made discovery so much nicer.
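
      Conceptually it's just a breadth-first walk of the follower graph from a seed account, capped at some depth. A toy sketch (the follower-fetching and blocking functions are stand-ins, not any real API):

        // Toy sketch of "recursive blocking": block a seed account, then
        // everyone following it, then their followers, up to maxDepth.
        async function recursiveBlock(
          seed: string,
          maxDepth: number,
          fetchFollowers: (account: string) => Promise<string[]>,
          block: (account: string) => Promise<void>,
        ): Promise<void> {
          const seen = new Set<string>([seed]);
          let frontier = [seed];
          for (let depth = 0; depth <= maxDepth && frontier.length > 0; depth++) {
            const next: string[] = [];
            for (const account of frontier) {
              await block(account);
              if (depth < maxDepth) {
                for (const follower of await fetchFollowers(account)) {
                  if (!seen.has(follower)) { seen.add(follower); next.push(follower); }
                }
              }
            }
            frontier = next;
          }
        }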

  • Can you name some such extensions? I was thinking about similar functionality and considered creating a similar service / extension / etc.; I even experimented a little long ago. Interesting to know what exists.

So, is this getting incentives right?

Not really, IMO. Even when users select moderation, they should be able to transiently unselect moderation features to get the unfiltered view - to take off the glasses at any time. Otherwise, moderation can devolve to censorship in ideological contexts, and BlueSky becomes a friend to those herding opinion.

Even if users can take off their glasses, moderation will likely still suppress production of content. (Why speak if no one's listening?) But by enabling users to experiment with turning off moderation, each user will always be able to assess whether the moderation is supportive or censoring. That in turn can temper the moderators to be reasonable, since their moderation is easily discoverable and measurable.

I think this is good for groups, too. If group-wide moderation rules are definitive and users cannot peek through them, then moderation itself is a powerful position, and you'll get the usual leadership contests as people game to manage opinion. If moderation instead is weakened by being ever-consensual, there will be less gaming to control opinion.

It's better for groups to manage themselves through membership. It may be hard to get in, but once in, your voice is hearable by all, moderation notwithstanding. So weak moderation also makes group membership more valuable, which in turn makes it easier to have the conditions of entry that can help people trust each other. (Making those conditions observable and consistently applied is another question.)

I’ll be honest. I really liked the BlueSky idea and was an early adopter. But I deleted the app last week when you announced your Trust & Safety team, or at least hired a new leader for it.

In my mind (and many other people), “Trust and Safety” is a synonym for censorship.

We live in a time where censorship is becoming a huge risk to freedom. A free society needs to be able to debate, and anyone or anything that decides that it has the wisdom to set limits about what people can discuss, is a threat to freedom.

I had hope that BlueSky was choosing a different path but now I guess not.

  • If I understood the article correctly, you can get a feed that isn't moderated by their team. So I'm not sure what your complaint is? That other people can get moderation if they choose to?

Does the Bluesky blog have an RSS feed?

I'm sure our extremely divided society will benefit greatly from configurable echo chambers.

  • It unironically will. Instead of locking everyone in the same room, just let people have their own spaces, and everyone can get out of each other's hair. I'll never understand when the excellent idea of localism was turned into the derogatory term 'echo chamber'.

    Unlike in the physical world, there's not even a limit to land; cyberspace is infinite. I don't understand why people who can't stand each other are so eager to get in each other's faces.

  • No-one’s come up with an obviously-correct, completed, broad-appeal social network design so far. Twitter’s approach worked to some extent, for a little while, but is clearly now utterly broken. Anyone who’s ever spent any time on an internet forum (broadly defined) knows that there has to be some kind of moderation. Personally, I’m simply not going to be using a social network where I’m subjected to routine transphobia – not because I don’t want to “hear both sides” but because I already have the firmest possible position against it and I don’t need to hear any more. Maybe a general solution doesn’t exist, but I applaud Bluesky for giving this a go.

  • These seem more like notch filters and I’m hoping they work. At least they are trying something, and a large scale experiment can only help figure this out (even through failure).

Bluesky still retains final cut with centralized censorship. This is a problem that strikes directly at the heart of the danger of social media.

Also, calling them “community guidelines” instead of “unilateral censorship rules” is deceptive.

  • It's only with their clients; if you use a different client you don't have to use their centralised filter. They need some form of censorship in their own client for plausible deniability so they don't get shut down (or face extreme deplatforming like Gab).

    • Which is a serious issue still to consider.

      The previous playbook was for the small handful of major "social" media platforms to be infiltrated-captured by bad actors, including but not limited to how pre-Elon Twitter was illegally working with the US government to suppress and censor free speech - many people and experts were suppressed or outright banned from the platform who were solely posting talking points that are now proven-known to be correct but were against the COVID-19 narrative.

      So in this new evolution, the question is how many different clients bad actors will attempt to nudge in their system design to benefit a censorship-suppression-narrative control apparatus - even something as simple as helping facilitate the creation and reinforcement of information bubbles.

      E.g. the establishment's goal or directive will be to direct as many people as possible towards the captured or vulnerable-capturable clients that have gained a critical mass, and as we can see there is a campaign to drive people off of Twitter-X - arguably in an effort to prevent people from seeing Community Notes that may start users developing their critical thinking and seeing different perspectives, whereas going on more of a wild west platform or network like Bluesky et al. can allow bubbles within bubbles to form, where bubbles will be far less likely to ever be exposed to general consensus Community Notes.

      At minimum, people need to be educated about and aware of these isolation tactics - part of divide and conquer: you need people to have wildly differing beliefs, perpetuating fear-mongering to weaponize an ideological mob - to the point of having them turn a blind eye to the actions of, or becoming, Gestapo themselves.

      Reddit's forum design has similar dark patterns that allow for easy narrative control and crafting, creating bubbles of acceptable argument points - and keeping people with incongruent beliefs from being exposed to logic and evidence that would cause cognitive dissonance and start to break the illusion for them.

      I think part of what can help manage the weaponizable ideological mob of mobs being isolated is to create a public matrix outlining accounts and displaying what content-keywords, and also perhaps who specifically they are "outlawing" - which then can allow a concerted effort to reach those people to do the hard work that takes more than keyboard warrioring to reach them.

      This online censorship-suppression-narrative control of course spills over into the real world once the control-desperate establishment's apparatus isn't sufficient - where people like the Tate brothers continue to have a platform to reach people to educate them on certain structures of reality, most notably the strategies of the establishment attempting to maintain the narrative control; and so they attempt other tactics to suppress the truth, like the assassination of Jamal Khashoggi, the cancellation attempts of Alex Jones, the imprisonment attempts of the Tate brothers because the ongoing smear campaign is failing; whereas Julian Assange was successfully imprisoned, and Edward Snowden has been exiled - all for being a threat to or shining light on and exposing the inner workings of what many are calling the "deep state" or the establishment.

  • This post says you can opt out of their moderation filter, but yes, if you are using Bluesky's app then Bluesky can control what can and can't be shown. In that case you can use a different client or PDS.

  • You're never going to be allowed to post CSAM on bsky.app, stop asking for this.

    • What makes you equate a fear of censorship with promotion of child sex abuse? This is an old trope and not based in fact.

      Google “twitter files”. Centralized censorship is a risk to a free society.

Very nice, this is an idea I've posted about before, and I'm glad someone is implementing it. Very interested to see how well it works out!

I don't get why this social media exists. Twitter was pretty bad before he sold it; I remember he went out of his way to hide the feed of people you actually follow, and people were just abusing keywords to get on my timeline with garbage.

Trends were also insanely manipulated.

I for one will not even remotely consider using another of his "social medias". It must be fun for him to sell his garbage-fied social media, make billions, then attempt to create the exact same thing and have sheep follow him to it, getting rug-pulled every few years so he inflates his pockets. No thank you.

  • Jack Dorsey does not have a stake in BlueSky. He has an advisory board seat, but deleted his account. He’s invested in nostr, not BlueSky.

      But he did set the mission and goals for Bluesky, thus determining its long-term direction. Personally, I think it's the wrong direction in terms of what's best for society. Source - I was one of the first members in the private Bluesky group he and Parag set up.

      2 replies →

[flagged]

  • Can you please stop posting these? It's not what HN is for, and we're getting complaints.

  • Is this LLM generated? It sounds like it

    • I don't think so, but it looks like it was worded by a marketeer in the style of a press release or a LinkedIn post. Very impersonal / nonhuman.

      edit: comment history says a lot of their comments are suspected chatgpt.

  • +5, Inspiring

    Are there any platforms from the 21st century which support metamoderation?

just another few years of solving moderation before you can finally add support for basic media types

Another term for moderation is censorship.

Something that has been well demonstrated in existing social networks. But as long as any given node in BlueSky is of limited reach / users, it is not much of a problem.

I do see this creating a lot more safe spaces / echo chambers and even more polarization for all forms of opinions, both politically correct and politically incorrect, unless the company enforces one side or the other in a specific way.

  • The discussion of moderation always falls into the pit of discussing echo chambers. And while that's a problem, it's perhaps better to keep it first on the topic of objective abuse moderation (basically: how to prevent 99% of content from being spam that no human user wants). Since that has to be solved regardless of whether there is any further moderation beyond that, the moderation discussion can be about that first. There is no echo chamber created by removing botspam.

    Next, you can have moderation that is strictly legal. Basically, how do you moderate things that courts are finding illegal? How do you reconcile that courts in different jurisdictions can disagree, and so on.

    Finally - and optionally - comes the step of moderating for bad behavior, disinformation and so on. That is, human behavior that has a negative impact but falls short of being obviously illegal. And that's a difficult topic, as we have seen with facebook/twitter and elections for example. But it's important to remember that moderation isn't just this. By volume, the kind mentioned in the first paragraph is far larger.

  • [flagged]

    • That isn't the distinction between censorship and moderation [1].

      Censorship is the removal or blocking of speech; moderation is a broader term that can include practices like flagging content without removing or hiding it completely.

      The state actor distinction in the US is only important when deciding if a party is bound by the First Amendment. My speech is protected from government censorship, not from censorship by a private company.

      [1] https://publicknowledge.org/content-moderation-is-not-synony...