Comment by pjmorris

3 years ago

> It would be interesting if HN had some bucket like /offtopic, for things that are flamebaity and removed from the main view, but I fear it would attract the aforementioned people who only ever troll there, and that dang probably has zero interest in mod'ing it.

What if, along with the '/offtopic' bucket, there were participation criteria that could be moderated by the other '/offtopic' participants? Not my wheelhouse, but something like: '/offtopic' threads are only readable by HN members and only editable by members in good standing, with an 'evict the troll' button that disallows further comments on a thread if enough users press it about a given comment/user.
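To make that concrete, here's a rough sketch of the kind of mechanism I mean. Every name, threshold and "good standing" rule in it is hypothetical; nothing like this exists on HN.

```python
# Rough sketch of a hypothetical 'evict the troll' mechanism.
# All names, thresholds and the "good standing" rule are made up.
from dataclasses import dataclass, field

EVICT_THRESHOLD = 5  # hypothetical number of good-standing votes needed


@dataclass
class Member:
    name: str
    karma: int

    def in_good_standing(self) -> bool:
        # Placeholder criterion; whatever HN would actually use is unknown here.
        return self.karma >= 100


@dataclass
class OfftopicThread:
    # target member name -> set of voter names who pressed the button
    evict_votes: dict = field(default_factory=dict)
    locked: bool = False  # once locked, no further comments are allowed

    def press_evict(self, voter: Member, target: Member) -> None:
        if self.locked or not voter.in_good_standing():
            return  # only members in good standing get a say
        voters = self.evict_votes.setdefault(target.name, set())
        voters.add(voter.name)  # each member counts once per target
        if len(voters) >= EVICT_THRESHOLD:
            self.locked = True


# Five good-standing members pressing the button locks the thread.
thread = OfftopicThread()
troll = Member("troll", karma=1)
for i in range(EVICT_THRESHOLD):
    thread.press_evict(Member(f"member{i}", karma=500), troll)
print(thread.locked)  # True
```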

> presence of an 'evict the troll' button that disallowed further comments on a thread if enough users press it about a given comment/user.

Such features will be abused by trolls, and sooner rather than later. Case in point: Twitter's recent new policy on doxxing, which was abused almost instantly (as in, less than 24 hours after release) by far-right mobs to silence BLM, antifa and feminist accounts.

Systems that ban people without a human in the loop (one with decent training, context awareness and time to judge properly) should themselves be banned outright because of the abuse potential.

  • I'm going to contest that applying a policy against doxxing to people who were in fact doxxing should be described as "abuse". It would be more accurate to say that applying the policy fairly led to unanticipated consequences.

  • Those were done by human moderators, who Twitter said needed retraining.

    But yes, this also happened on, e.g., Facebook, where activists blamed "color blind" application of the rules for flagging a lot of things as "hate speech."

    • > Those were done by human moderators, who Twitter said needed retraining.

      Given how often Twitter has had issues with "AI" (like people getting shadow banned or blocked over years-old posts, coincidentally timed with PR releases about "how to combat xxx"), I don't trust that statement at all.

      Twitter, Facebook/Instagram and Google/YouTube have been widely known to use AI as the first stage of content moderation for years (want to try it? Post a picture of genitalia on Twitter and your account will be set to NSFW in a matter of seconds, or post something with a "copyrighted" audio track on YouTube), and people have exploited that for just as long. We've seen various complaints about unjustified bans on all of these services hit #1 here on HN, simply because the affected people had no other way to reach a human support resource.

      I do get that this might be necessary at scale - if fifty people flag something as abusive content or spam, it probably is abusive content... the problem is the 1% who are the target of organized reporting/trolling campaigns, and for those people that really, really sucks.
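      (The naive version of that scale argument is literally just a counter, something like the hypothetical sketch below, and a counter can't tell fifty independent reports from fifty accounts in one organized campaign.)

      ```python
      # Hypothetical sketch of naive first-stage automated moderation:
      # enough flags => auto-hide. Fifty genuine reports and fifty accounts
      # in an organized campaign look exactly the same to this rule.
      FLAG_THRESHOLD = 50  # made-up number

      def should_auto_hide(flag_count: int) -> bool:
          """Naive rule: enough flags means the content is presumed abusive."""
          return flag_count >= FLAG_THRESHOLD

      print(should_auto_hide(49))  # False
      print(should_auto_hide(50))  # True, regardless of who flagged it
      ```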