Comment by NietzscheanNull
16 hours ago
How can you be certain that the ChatGPT "research" you cite is a faithful representation of facts? How do you know that OpenAI/Anthropic/Google haven't introduced RLHF to subtly steer model output on specific topics to align with their political/economic interests?
I'm seeing increasing numbers of people credulously citing ChatGPT/Claude/Gemini output as ground-truth fact. Many more are lulled into a false sense of security by the citations models append (to the point of neglecting even a bare-minimum skim of the cited sources, much less critically evaluating/contextualizing the nature of the sources themselves). My fear is that most people are blissfully ignorant of the new paradigms of propaganda that AI could enable; most of us here wouldn't be taken in by the "slop" image-gen deepfakes (right now), but can you say the same about a couple of citations taken out of context?
We already know how trivial it is to win over a sizeable chunk of society by introducing red herrings, misrepresenting statistical data, etc. -- oil companies perfected that art, and as a result a huge number of voters in the US now believe that climate change (doesn't exist|isn't man-made|is unavoidable). And that effort was "fully manual," carried out without the aid of extensive psychological profiling at the individual level via an ad-surveillance complex. Today, society is almost completely defenseless against the extreme granularity/subtlety of manipulation that ownership of frontier AI models enables, especially when that ownership is armed with even a fraction of the torrent of personal data being collected on each of us every day.