Comment by partiallypro
2 years ago
This quote from the article is something I genuinely fear:
> "The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large-language models gets published on the Web, the more the Web becomes a blurrier version of itself."
I am fearful that eventually AI-led misinformation is going to be so widespread that it will be impossible to reverse. Microsoft and Google HAVE to get a grip on this before it becomes a runaway problem. Building AI detection into their traditional search engines, so that generated content is kept both from reaching the top of results and from feeding back into their own models and degrading them into factories of complete garbage information, is going to be incredibly important.
We already have a massive problem determining what is real and what isn't with state actors, corporate speak, etc., and now we'll be adding AI-generated language on top of that, which could be even worse.
Agreed about the problem, not the solution. Detection won't work; it's way too noisy. We're heading for bumpy times: soon you will no longer need to be a govt to run a credible disinfo campaign. You can run one from your basement (replacing beer brewing or sourdough making, perhaps).
I can see your point about there being too much noise. I don't know a good solution, but I feel we may be opening a big can of worms that we'll have to figure out, especially over the next decade.