Comment by pegasus

2 years ago

What he misses in this analogy is that part of what produces the "blur" is the superimposition of many relevant paragraphs found on the web into one. This mechanism can be very useful, because it can average out errors and give a less one-sided perspective on a particular issue. It doesn't always work like this, but hopefully it will more and more. Even more useful would be a cluster analysis of the existing perspectives that produces a representative synthesis of each, along with a weight representing its popularity. So there's a lot of room for improvement, but in my opinion the potential is there.
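A toy sketch of that cluster-and-weight idea: group similar opinions, then report one representative per cluster plus a popularity weight (the cluster's share of all opinions). Everything here is an illustrative assumption, not any real system's pipeline: the keyword sets stand in for a proper text embedding, and the greedy Jaccard grouping with a fixed threshold stands in for a real clustering algorithm.

```python
def jaccard(a, b):
    """Similarity between two non-empty keyword sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

def cluster_opinions(opinions, threshold=0.5):
    """Greedy clustering: each opinion joins the first cluster whose
    founding keyword set is similar enough, else starts a new cluster."""
    clusters = []  # each: {"rep": keyword set, "members": [opinion texts]}
    for text, keywords in opinions:
        for c in clusters:
            if jaccard(keywords, c["rep"]) >= threshold:
                c["members"].append(text)
                break
        else:
            clusters.append({"rep": keywords, "members": [text]})
    return clusters

if __name__ == "__main__":
    # Hypothetical opinions scraped on some issue, each with hand-picked keywords.
    opinions = [
        ("LLMs will average out errors", {"llm", "average", "errors"}),
        ("Averaging reduces errors in LLM output", {"llm", "average", "errors", "output"}),
        ("Amateur consensus drowns out experts", {"amateurs", "experts", "consensus"}),
        ("Experts get downvoted by amateurs", {"amateurs", "experts", "downvoted"}),
        ("Averaging over sources reduces errors", {"average", "errors", "sources"}),
    ]
    for c in cluster_opinions(opinions):
        weight = len(c["members"]) / len(opinions)  # popularity weight
        print(f"{weight:.0%}  {c['members'][0]}")
```

A real version would synthesize a summary of each cluster rather than just printing its first member, but the weights-per-perspective output is the point.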

If anything, the average has far more errors in it. It's a trope on Reddit that experts get downvoted while amateurs who reflect the consensus of other amateurs get upvoted and repeated. Amateurs tend to outnumber experts in real life anyway; having their opinions become more authoritative (because some "AI" repeats them) is probably not a great direction to head in.

  • But isn't this issue equally present with Google search? What I'm saying is that by smartly aggregating all the different opinions on an issue, an LLM could provide better visibility into our collective mind than we currently get by scanning the first few results of a search-engine query. Let's not forget that we're looking at version 0.0001 of this new technology, so there should be lots of room for growth.

  • This is very frustrating. Some smaller communities on Reddit have a high concentration of domain experts and they're great.

    One community in particular is huge and consists mainly of beginners. They regularly drown out and downvote the few who actually know the subject matter. To add insult to injury, they even mock experts with caricatures built around how the experts disagree with the amateurs.