Comment by netsharc

16 hours ago

AI enhanced language monitor, what a double plus good improvement for society!

I get this.

There’s a line on our doc page:

> Respectify is not an engine for monoculture of thought, but in fact intends to assist in the opposite, while encouraging healthy interaction along the way.

We don’t want to monitor or enforce saying specific things. We want people to be able to speak, but understand how others will hear them.

All those times people talk past each other. Or are rude but don’t realise it. Or are rude but don’t care (and should, because it’s a human on the other end). Or, worst, the people who intentionally say something awful and… just maybe can learn a bit about what they’re saying.

I get your fear. I think I’ve seen AI used for bad quite a bit. I hope, given the tech isn’t going away, we can use it to make things a bit better. That’s the goal.

  • Intent is immaterial if the output doesn’t match. The very nature of the product, in attempting to coach commenters to argue in the “correct” way, goes against your stated goals. This will encourage the kind of algo-speak self-censorship now common on TikTok etc., just more effectively, because it at least tries to explain the rules.

Nick Hodges here -- one of the developers.

I get that objection, and we certainly don’t want that to become the norm. The idea, of course, is to discourage the comments that aren’t helpful, not to police what people say.

Different bloggers and different communities are going to define “helpful” differently. That is why we are making a good-faith effort to let sites, people, and groups tweak this as desired.

Thanks for your feedback.