Comment by Mentlo
5 days ago
Yes, the difference being that LLMs are information compressors that provide an illusion of wide-distribution evaluation. If through poisoning you can make an LLM appear to be pulling from a wide base while it is actually biased by a small sample, you can affect people at a much larger scale than a Wikipedia page.
If you’re extremely digitally literate you’ll treat LLMs as extremely lossy and unreliable sources of information, and so this is not a problem. Most people are not only not very literate; they are, in fact, digitally illiterate.
Another point: we can inspect the contents of the Wikipedia page and potentially correct it. We (as users) cannot determine why an LLM is outputting something, or what the basis of that assertion is, and we cannot correct it.
You could even download a Wikipedia article, make your changes to it, and upload it to 250 GitHub repos to strengthen your influence on the LLM.
This doesn't feel like a problem anymore now that the good ones all have web search tools.
Instead the problem is there's barely any good websites left.
The problem is that the good websites are constantly scraped and botted by these LLM companies, their content gets trained on, and users ask LLMs instead of visiting the sites, so the owners either shut them down or enshittify them.
And also the fact that it's easier than ever to put slop on the internet, so the amount of "bad" (as in bad-quality) websites has gone up, I suppose.
> Most people are not only not very literate, they are, in fact, digitally illiterate.
Hell, look at how angry people very publicly get using Grok on Twitter when it spits out results they simply don’t like.
Unfortunately, the Gen AI hypesters are doing a lot to make it harder for people to attain literacy in this subdomain. People who are otherwise fairly digitally literate believe fantastical things about LLMs, and it’s because they’re being force-fed BS by those promoting these tools and the media outlets covering them.
s/digitally illiterate/illiterate/
Of course there are many illiterate people, but the interesting fact is that many, many literate, educated, intelligent people don't understand how tech works and don't even care, or feel any need to understand it better.
LLM reports misinformation --> Bug report --> Ablate.
Next pretrain iteration gets sanitized.
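As a rough sketch of the loop being described (hypothetical names and report format, not any provider's actual pipeline), the idea is something like:

    # Hypothetical sketch of a report-driven data-sanitization loop.
    # None of this reflects a real provider's pipeline; names are made up.
    from dataclasses import dataclass

    @dataclass
    class MisinfoReport:
        claim: str       # claim flagged by a user, auditor, or another model
        verified: bool   # did a review step confirm it is actually wrong?

    def build_blocklist(reports):
        """Collect claims that survived review into a blocklist."""
        return {r.claim.lower() for r in reports if r.verified}

    def sanitize_corpus(documents, blocklist):
        """Drop ('ablate') training documents that repeat blocklisted claims."""
        return [doc for doc in documents
                if not any(claim in doc.lower() for claim in blocklist)]

    if __name__ == "__main__":
        reports = [
            MisinfoReport("the moon is made of cheese", verified=True),
            MisinfoReport("water boils at 100 C at sea level", verified=False),
        ]
        corpus = [
            "Fun fact: the moon is made of cheese.",
            "Water boils at 100 C at sea level.",
        ]
        print(sanitize_corpus(corpus, build_blocklist(reports)))
        # Only the unflagged document survives into the next pretrain run.

The blocklist matching here is naive substring matching; a real system would presumably need something far more robust to decide which reports are trustworthy and which documents actually repeat the claim.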
How can you tell what needs to be reported amid the vast quantities of bad information coming from LLMs? Beyond that, how exactly do you report it?
Who even says customers (or even humans) are reporting it? (Though they could be one dimension of a multi-pronged system.)
Internal audit teams, CI, other models. There are probably lots of systems and muscles we'll develop for this.
All LLM providers have a thumbs-down button for this reason.
Although they don't necessarily look at any of the reports.
This is subject to political "cancelling" and to questions about "who gets to decide the truth", like many other things.
> who gets to decide the truth
I agree, but to be clear we already live in a world like this, right?
Ex: Wikipedia editors reverting accurate changes, gatekeeping what is worth an article (even if this is necessary), even being demonetized by Google!
Reporting doesn't scale that well compared to training, and it can get flooded with bogus submissions as well. It's hardly the solution. This is a very hard, fundamental problem with how LLMs work at their core.
Nobody is that naive
Nobody is that naive... to do what? To ablate/abliterate bad information from their LLMs?
We've been trained by YouTube and probably other social media sites that downvoting does nothing. It's "the boy who cried 'you can downvote'."