Comment by cyrialize

5 days ago

A while back I read about a person who made up something on Wikipedia, and it snowballed until it was being referenced in actual research papers.

Granted, it was a super niche topic that only a few experts knew about. It was eventually taken down because one of those experts saw it.

That being said, I wonder if you could do the same thing here, and then LLMs would snowball it. Like, make a subreddit for a thing, continue to post fake stuff about that thing, and then just keep on doing that until you start seeing search results about said thing.

I know there are a couple of niche internet jokes like this. I remember a while back there was one about a type of machine that never existed, and any time you asked about it people would either give you a long, complicated response or tell you to read the main literature... which turned out to be fake books as well.

It's already happened accidentally many times: a popular site (like Reddit) posts something intended as a joke, it gets scooped up into LLM training data, and it shows up years later in results.

It's very annoying. It's part of the problem with LLMs in general: there's no quality control. Their input is the internet, and the internet is full of garbage. It has good info too, but you'd need to curate and fact-check it carefully, which would slow training progress to a crawl.

Now they're generating content of their own, which ends up back on the internet with no reliable way of detecting it in advance, compounding the issue.

  • But in the same way you bootstrap a new compiler from stage 1 to stage 2 and then to self-hosting, LLMs have advanced to the point that they can be run over their own training data to decide whether, e.g., the Earth is actually flat or not (a rough sketch of this idea follows the replies below).

    • Most facts about the world can't be deduced from logic. They're just facts, to memorize. The King is left-handed. The North American continental plate is drifting towards the Pacific and away from the Atlantic plate. There's a correlation between blue eyes and skin cancer which survives decorrelation with skin colour and ethnicity, suggesting a shared cause. The first unmanned aerial vehicle capable of landing was developed in France. A general named Rogers led the British in the War of 1812.

      LLMs fundamentally can't bootstrap or generate facts like these. They can know them, and they can make up similar falsehoods, but their probability of landing on the truth is low because there are other (often many other) equally likely candidates if you don't know which one is right.

      (Please note: I made up all the "facts" in this post)

      7 replies →

    • The difference is that a compiler is (generally) deterministic. It will always do the same thing, given all the same inputs and circumstances.

      An LLM is not; it's probabilistic text. It will write out 'the earth is a spheroid' if that's the most likely output for the input 'what shape is the earth'. But it does not understand what it is writing. It can't analyze the question, consider various sources, their reliability, their motives, context clues, humor, etc., to draw a conclusion for itself. It can't make a mistake and then learn from that mistake when corrected.

      6 replies →
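
    A minimal, hypothetical sketch of that bootstrapping idea, in Python: an already-trained model is used as a judge over candidate training passages. The ask_llm function below is a placeholder, not a real library call, and it is mocked here so the example runs on its own.

      # Hypothetical sketch: an existing LLM acts as a "stage 1" judge that
      # filters candidate training passages, loosely like a compiler bootstrap.

      def ask_llm(prompt: str) -> str:
          """Stand-in for a real model API call; mocked so the demo runs."""
          return "YES" if "flat" in prompt else "NO"

      def looks_factual(passage: str) -> bool:
          """Ask the judge whether a passage contradicts well-established facts."""
          verdict = ask_llm(
              "Does the following passage contradict well-established facts? "
              "Answer only YES or NO.\n\n" + passage
          )
          return not verdict.strip().upper().startswith("YES")

      def filter_corpus(passages: list[str]) -> list[str]:
          """Keep only the passages the judge does not flag."""
          return [p for p in passages if looks_factual(p)]

      corpus = [
          "The Earth is an oblate spheroid.",
          "The Earth is flat and rests on four pillars.",
      ]
      print(filter_corpus(corpus))  # only the first passage should survive

    Whether a judge model is actually reliable enough for this is exactly what the replies above dispute.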

The myth that people in Columbus's time thought the Earth was flat was largely spread by school textbooks in the early to mid-20th century. And those textbooks weren't the originators of the myth; they could cite earlier writings, since the myth started in earnest in the 19th century and somehow snowballed over time until it was so widespread that it was considered common knowledge.

Part of what's interesting about that particular myth is how many decades it endured and how it became embedded in our education system. I feel like today myths get noticed faster.