Comment by intended

8 days ago

Hey, this is part of my thesis and what I’m working towards figuring out.

People are already working on LLMs to assist with content moderation (e.g., COPE). Their model can apply a given policy (e.g., a harassment policy) to a piece of content and judge whether it meets the criteria. So the tooling will be built, one way or another.
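
To make that concrete, here's a minimal sketch of the policy-as-prompt pattern. This is not COPE's actual implementation (its internals aren't described here); it just illustrates the shape, assuming the openai Python SDK and a made-up one-line harassment policy:

    # Sketch only: hand the model a policy plus a piece of content and ask
    # for a binary verdict. Policy text and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    HARASSMENT_POLICY = (
        "Content violates this policy if it targets an individual with "
        "threats, demeaning slurs, or sustained personal attacks."
    )

    def judge(content: str, policy: str = HARASSMENT_POLICY) -> bool:
        """Return True if the model judges `content` to violate `policy`."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any instruction-following model
            temperature=0,        # keep judgments as repeatable as possible
            messages=[
                {"role": "system",
                 "content": "You are a content moderator. Apply the policy "
                            "below to the user's content and answer with "
                            "exactly one word: VIOLATES or OK.\n\n"
                            "Policy:\n" + policy},
                {"role": "user", "content": content},
            ],
        )
        verdict = response.choices[0].message.content.strip().upper()
        return verdict.startswith("VIOLATES")

The point is that the policy is just data: swap in a different policy string and the same judge covers a different category of content, which is why this kind of tooling generalizes so quickly.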

My support for the thesis is also driven by how dark the prognostications are.

Soon we won't be able to distinguish humans from bots, or even facts from fabrications. The only things that will remain relatively stable are human wants / values and rules / norms.

Bots that encourage prosocial behavior, norms, and more are definitely needed; they're the natural survival tools we will need.