Comment by hibikir

8 days ago

Since detecting LLMs is a silly end goal, the future of moderation probably needs LLMs too, but to evaluate text and see whether it amounts to blatant commercial speech. That will ruin places where some kinds of commercial speech are wanted (say, asking for a recommendation on reddit). Still, the mindless recommendation of crypto rugpulls and other similar scams will go away.

I am more concerned about voice-alignment efforts, like someone creating 10k real-ish accounts over time that appear to contribute, but are really there just to abuse upvote features and change perception. Ultimately, figuring out what is a real measure of popularity, and what is just a campaign to, say, send people to your play, is going to get even harder than it is now.

> It will ruin places where some kinds of commercial speech is wanted (say, asking for a recommendation on reddit).

This also depends on the culture. For example, what in the USA would be considered a "recommendation" (such as on Reddit) would often be considered "insanely pushy advertising" in Germany.

With this in mind, wouldn't a partial solution also be to become less tolerant of such pushy advertising in those places (say, on Reddit), even when it comes from honest users?

  • When it's obvious that entire posts and users are fake, when product pages on Amazon (which are also sometimes fake) can change which product they list for sale, and when upvotes/likes/shares are openly for sale, is it really such a stretch to assume that all "recommendations" are as fake as the original question likely is, until we have evidence to the contrary?