Comment by londons_explore

4 days ago

A single malicious Wikipedia page can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source.

LLMs are no more robust.

Yes, the difference being that LLMs are information compressors that provide an illusion of evaluating a wide distribution of sources. If, through poisoning, you can make an LLM appear to be pulling from a broad base while it is actually biased by a small sample - you can affect people at a much larger scale than a Wikipedia page.

If you’re extremely digitally literate, you’ll treat LLMs as extremely lossy and unreliable sources of information, and thus this is not a problem. Most people are not only not very literate, they are, in fact, digitally illiterate.

  • Another point: we can inspect the contents of the Wikipedia page, and potentially correct it; we (as users) cannot determine why an LLM is outputting something, or what the basis of that assertion is, and we cannot correct it.

    • You could even download a Wikipedia article, make your changes to it, and upload it to 250 GitHub repos to strengthen your influence on the LLM.

    • This doesn't feel like a problem anymore now that the good ones all have web search tools.

      Instead, the problem is that there are barely any good websites left.

  • > Most people are not only not very literate, they are, in fact, digitally illiterate.

    Hell, look at how angry people very publicly get using Grok on Twitter when it spits out results they simply don’t like.

  • Unfortunately, the Gen AI hypesters are doing a lot to make it harder for people to attain literacy in this subdomain. People who are otherwise fairly digitally literate believe fantastical things about LLMs, and it’s because they’re being force-fed BS by those promoting these tools and the media outlets covering them.

  • s/digitally illiterate/illiterate/

    • Of course there are many illiterate people, but the interesting fact is that many, many literate, educated, intelligent people don't understand how tech works, don't even care, and don't feel any need to understand it more.

  • LLM reports misinformation --> Bug report --> Ablate.

    Next pretrain iteration gets sanitized.

    • How can you tell what needs to be reported vs. the vast quantities of bad information coming from LLMs? Beyond that, how exactly do you report it?

    • Reporting doesn't scale that well compared to training, and it can get flooded with bogus submissions as well. It's hardly the solution. This is a very hard problem, fundamental to how LLMs work at their core.

    • We've been trained by YouTube and probably other social media sites that downvoting does nothing. It's "the boy who cried 'you can downvote'".

Wikipedia for non-obscure hot topics gets a lot of eyeballs. You have probably seen a contested edit war at least once. This doesn't mean it's perfect, but it's all there in the open, and if you see it you can take part in the battle.

This openness doesn't exist in LLMs.

The problem is that Wikipedia pages are public and LLM interactions generally aren't. An LLM yielding poisoned results may not be as easy to spot as a public Wikipedia page. Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not, especially if their training corpus is large enough. Not knowing that intentional poisoning is not only possible but relatively easy, combined with poisoned results being harder to find in the first place, makes it a lot less likely that poisoned results are noticed and responded to in a timely manner. Also consider that anyone can fix a malicious Wikipedia edit as soon as they find one, while the only recourse for a poisoned LLM output is to report it and pray it somehow gets fixed.

  • > Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not, especially if their training corpus is large enough.

    I'm not sure this is true. The opposite may be true.

    Many people assume that LLMs are programmed by engineers (biased humans working at companies with vested interests) and that Wikipedia mods are saints.

    • I don't think anybody who has seen an edit war thinks wiki editors (not mods, mods have a different role) are saints.

      But a Wikipedia page cannot survive stating something completely outside the consensus. Bizarre statements cannot survive because they require reputable references to back them.

      There's bias in Wikipedia, of course, but it's the kind of bias already present in the society that created it.

Isn't the difference here that to poison Wikipedia you have to do it quite aggressively, by directly altering the article, which can easily be challenged, whereas training-data poisoning can be done much more subversively?

Good thing wiki articles are publicly reviewed and discussed.

LLM "conversations" otoh, are private and not available for the public to review or counter.

Unclear what this means for AGI (the average guy isn’t that smart), but it’s obviously a bad sign for ASI.

  • So are we just gonna keep putting new letters in between A and I to move the goalposts? When are we going to give up the fantasy that LLMs are "intelligent" at all?

LLMs are less robust individually because they can be (more predictably) triggered. Humans tend to lie along a bell curve, so it’s really hard to push a whole population across certain thresholds.

  • Classical conditioning experiments seem to show that humans (and other animals) are fairly easily triggered as well. Humans have a tendency to think themselves unique when we are not.

    • Only individually, and only if significantly more effort is spent on specific individuals - and there will be outliers who are essentially impossible to condition.

      The challenge here is that a few specific poisoned documents (out of billions in the training set) can get, say, 90% or more of LLMs to behave in specific pathological ways.

      It’s nearly impossible to get 90% of humans to behave the same way on anything without massive amounts of specific training across the whole population - with ongoing specific reinforcement.

      Hell, even giving people large packets of cash and telling them to keep it, I’d be surprised if you could get 90% of them to actually do so - you’d have the ‘it’s a trap’ folks, the ‘god wouldn’t want me to’ folks, the ‘it’s a crime’ folks, etc.

But is poisoning just fooling? Or is it more akin to stage hypnosis, where I can later say bananas and you dance like a chicken?

  • My understanding is it’s more akin to stage hypnosis, where you say bananas and they tell you all their passwords

    … the article's example of a potential exploit is exfiltration of data.

I see this argument by analogy to human behavior everywhere, and it strikes me as circular reasoning. We do not know enough about either the human mind or LLMs to make comparisons like this.

A single malicious scientific study can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source. LLMs are no more robust.

A single malicious infotainment outlet can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source.

LLMs are no more robust.