Comment by vintagedave
9 hours ago
I get this.
There’s a line on our doc page:
> Respectify is not an engine for monoculture of thought, but in fact intends to assist in the opposite, while encouraging healthy interaction along the way.
We don’t want to monitor or enforce saying specific things. We want people to be able to speak, but understand how others will hear them.
All those times people talk past each other. Or are rude but don’t realise it. Or are rude but don’t care (and should, because it’s a human on the other end). Or, worst of all, the people who intentionally say something awful and… just maybe can learn a bit about what they’re saying.
I get your fear. I’ve seen AI used for bad quite a bit too. But given the tech isn’t going away, I hope we can use it to make things a bit better. That’s the goal.
Intent is immaterial if the output doesn’t match it. The very nature of the product, in attempting to coach commenters to argue in the “correct” way, goes against your stated goals. This will encourage the kind of algo-speak self-censorship now common on TikTok etc., just more effectively, because it at least tries to explain the rules.