Comment by m_0x

2 years ago

> The system promotes answers which the public believes to be correct

Well.. duh?

Until AI takes over the world, this will be correct for everything. News, comments, everything.

Mmm... no? StackOverflow is powered by voting. Not all forums work like that (it was a questionable choice at the time StackOverflow started).

I've been a moderator on a couple of ForumBB-style forums, and the idea of karma points was often brought up in moderator meetings. Those with more experience in this field would usually try to dissuade the less experienced mods from implementing any karma system.

Moderators used to have ways of promoting specific posts. In the context of ForumBB you could mark a thread as important or make it sticky. Also, a post by a moderator would stand out (or could be made to stand out), so other forum users would know whether someone was speaking from a position of experience or authority, or whether that was yet to be determined.

Social media went increasingly in the direction of automating moderators' work by extracting that information from the users... but this is definitely not the only (and probably not the best) way of approaching this problem. Moderators are just harder to train and more expensive to keep.

I hold little hope that LLMs will help us reason through "correctness." If these AIs scour the troves of idiocy on the internet, believing whatever fits the patterns rather than applying critical reasoning, they too will pick up the bandwagon's opinions and perpetuate them. Ad populum will remain a persistent fallacy if we humans don't learn appropriate reasoning skills.

  • They've already proven that LLMs are capable of creating an internal model of the world (or, in the case of the study that proved it, a model of the game it was being trained on). If LLMs have a world model, then they are fully capable of generating truth beyond whatever they are trained on. We may not be there yet (and who knows how long it will take), but it is in principle true that LLMs can move beyond their training data.

AI isn't going to do better under current paradigms; it has exactly the same flaw.

Of course, consensus is a difficult philosophical topic. But not every system is based on public voting.