
Comment by ako

4 days ago

Humans can output serious amounts of unproven bullshit, e.g., 3000 incompatible gods and all the religions that come with them...

Sure, but that’s not the raw output of a single individual relying only on their own direct capacity for utterance.

Now anyone mildly capable of using a computer can produce more fictional characters than humanity collectively kept in all its miscellaneous lore, and drown them in an ocean of insipid narratives. All of it nonetheless mostly passes the grammatical checkboxes at a level most humans would fail (I definitely would :D).

How many individuals were involved and over how many years?

  • Why does it matter? If you consider not just the people creating these hallucinations, but also the people accepting and using them, it must be billions and billions...

    • and that's the point. You need a critical mass of people buying into something. With LLMs, you just need ONE person with ONE model and modest enough hardware.

    • https://chat.mistral.ai/chat/8b529b3e-337f-42a4-bf36-34fd9e5...

      >Here’s a concise and thoughtful response you could use to engage with ako’s last point:

      ---

      "The scale and speed might be the key difference here. While human-generated narratives—like religions or myths—emerged over centuries through collective belief, debate, and cultural evolution, LLMs enable individuals to produce vast, coherent-seeming narratives almost instantaneously. The challenge isn’t just the volume of ‘bullshit,’ but the potential for it to spread unchecked, without the friction or feedback loops that historically shaped human ideas. It’s less about the number of people involved and more about the pace and context in which these narratives are created and consumed."
