Comment by woeirua
5 years ago
If there's anything that gives me hope that we can avoid a dystopian future driven by social media, it's that Deep-learning / AI is being used to cheaply create realistic forgeries of just about everything: profile pictures, text, profiles, voice recordings, etc.
Within the next 10 years, and maybe much sooner, the vast majority of content on FB/Twitter/Reddit/LinkedIn will be completely fake. The "people" on those networks will be fake as well. Sure there are bots today, but they're not nearly as good as what I'm talking about, and they don't exist at the same scale. Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.
IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.
My family grew up behind the iron curtain. At a family event once I heard someone tell a story that I think has been the most accurate prediction of the last few years (if anyone knows the actual interview event, please tell me more so I can get the exact wording, this is all paraphrasing from childhood memories).
A western reporter travelled to the other side of the iron curtain once and was doing what he thought would be an easy west-is-great gotcha-style interview. He asked someone over there, "How do you even know what's going on in your country if your media is so tightly controlled?" Think Chernobyl-levels of tight-lipped ministry-of-information-approved newspapers.
The easterner replied, "Oh, we're better informed than you guys. You see, the difference is we know what we're reading is all propaganda, so we try to piece together the truth from all the sources and from what isn't said. You in the west don't realize you're reading propaganda."
I've been thinking about this more and more the last few years seeing how media bubbles have polarized, fragmented, and destabilized everyone and everything. God help us when cheap ubiquitous deepfakes industrialize the dissemination of perfectly-tailored engineered narratives.
I’ve heard this story too when growing up. I belong to one of the last generations born in the German Democratic Republic. A quite prominent element of our History and German lessons in the 2000s was critical reading of historic news and caricatures, we did these analyses in exams up to A-levels. Propaganda was a big topic, not only when learning about the Third Reich. One reason certainly was that all our teachers spent most of their lives in the GDR system.
I’ve been wondering whether teachers who grew up on the other side of the curtain put a similar emphasis on the topic of propaganda, especially after social media exposed so much gullibility in the general public, along with a trust (very difficult for me to understand) in anything as long as it is written down somewhere, often without even a glance at the source. Political effects of East German brain drain aside, one important difference between people in the former western and eastern parts of Germany to this day is how much they trust media and institutions like the church.
I find this unpersuasive.
The level of control/conformity on canonical Western media was such that, for most topics of daily news, thinking about the bias of the reporter was not a first-order concern.
For some topics (let's say, hot-button US-vs-USSR things, or race issues in the US), the bias of the source was of course important, anywhere.
But for, say, reporting inflation, unemployment, or the wheat harvest, whether NBC news or the Washington Post was biased wasn't critical in the same way it would have been in the USSR.
Basically, my argument is that the difference in degree is still a worthwhile difference.
While a segment of HN commenters could go on for hours about U-3 vs. U-6 unemployment numbers and the politicization thereof, for most media consumers there is no real difference. Truth largely settles along a binary choice between the mainstream alternatives. Within those strains, views are very self-congruent. Perhaps that’s coincidence, or there are only two real truths, but I’ll defer to PG’s writings on that.
The real difference is that those in the east were predisposed to be suspicious, whereas in the west that disposition or curiosity is not a thing.
Bias can be reflected in which stats are reported at all. There's also the framing of the numbers and the conclusions stated or implied.
Have you noticed the topics for which there's remarkable conformity between US and UK media compared with other western media? https://news.ycombinator.com/item?id=24364947
Ah but universal cynicism and nihilism is also a form of control. When the very idea of objective truth has been destroyed, this makes the job of authoritarians easier, not harder.
The point isn't to be a cynic and a nihilist; it's to become a skeptic and to be mentally trained to always read between the lines. "Critical thinking," as they said in grade school.
The cliche "if you're not paying for it, you're the product" is just the tech nerd's version of "if you don't know who the fish at the table is, you're the fish."
Folks behind the iron curtain got used to that mentality over a few decades in a time when information flowed slowly through newspapers, radio, and early TV... we're now being forced to reckon with these tricks over the course of a few years while moving at the speed of industrialized data collection, microtargeting, and engineered dopamine bursts that maximize engagement.
People living in the cold war era were at least mentally inoculated against these tricks -- in the US we've had no preparation for it. The ease with which we've turned against each other for the easy popcorn comfort of the conspiracy theory or outrage du jour is mind boggling.
Yes, which is why Russian propaganda is more concerned about muddying the waters than constructing any particular narrative.
> Ah but universal cynicism and nihilism is also a form of control. When the very idea of objective truth has been destroyed, this makes the job of authoritarians easier, not harder.
Universal cynicism and nihilism may function that way. But that was not the attitude of the person in the description. So I am not sure how that is relevant?
This reminds me of a joke: in the USSR, to know the truth you only had to put a NOT in front of every article in Pravda, because they were all false. In the USA you can't do that, because only half of them are false.
It is sad that the wisdom from behind the iron curtain (where I grew up, too) is so fitting in the US (where I now live) today. I find that critical assessment of the media, resistance to propaganda and brainwashing detection skills acquired over there served me very well in the US.
I wish those skills were teachable without recreating the full environment...
> we try to piece together the truth from all the sources and from what isn't said
I'm skeptical that this can be done effectively.
Dr. Linebarger[1] wrote first a textbook (for the US army) and then a book (for the general public) on "Psychological Warfare" which incidentally contains a section, with an outlined method complete with mnemonic acronym (STASM), on media analysis.
"If you agree with it, it's truth. If you don't agree, it's propaganda. Pretend that it is all propaganda. See what happens on your analysis reports."
Mad magazine used to run "reading between the lines" pieces.
[1] A while ago I learned The Game of Rat and Dragon is accurate insofar as felines not only have better reflexes than ours, they're among the best.
Ask anyone from China and they will tell you the exact same thing. They know their news is state sponsored and all propaganda. People in the United States are blissfully unaware.
We still have a robust ecosystem of quality journalism in the US. There is bias, there are mistakes made, and there is false information masquerading as news that can mislead media consumers if they are not careful. But we are still very far from the situation in China and Russia. To be clear there is a problem, and it's growing, but let's not exaggerate.
Somehow what you were saying reminded me of reading The Onion.
You know, where they have those opinion pieces always with the same 6 photos (but a different name & occupation) each spouting something humorous?
And curiously, there is some truth hidden within each Onion article.
Exactly, ask the same to anyone in Cuba or Venezuela.
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online
On the flip side, successful startups that aren't full social networks but do require some authenticity verification have already been proven: Nextdoor and Blind, for example.
I think the biggest issue is that scaling to a Facebook-style, Reddit-style, or Twitter-style "full-world" social network implies colliding people who have no other relationship or interaction but are linked through a topic or shared interest.
And, in my opinion, when you hit a certain level of scale, the verification almost becomes pointless: there are enough loud, angry, trollish people out there that I don't think it matters whether they're verified or not. You can't moderate away toxicity in discussions that include literally a million participants.
I think you need both verification and some way to keep all the users' subnetworks small enough that it isn't toxic or chilling. But then you lose that addictive feed of endless content that links people to reddit or Facebook or Instagram. Tough problem
> You can't moderate away toxicity in discussions that include literally a million participants.
In my opinion HN is the gold-standard of online communities and it's being managed pretty well despite it scaling to what it is right now.
I wonder whether more learnings from HN (especially on the moderation front) can be applied to newer social platforms.
The moderation here is very good, but I think cultural self-selection is a big factor too. Speaking broadly, it attracts technical, logical people who share values and standards around reasoned debate. I don't see that part scaling to society at large.
I don't even think toxicity is a problem for users without a public persona. Those who are public have to play by the same rules that were already in place for classical PR.
We only got this problem when users started trying to do the housecleaning themselves. Most communities are completely fine without authentication, so it certainly isn't necessary.
> But then you lose that addictive feed of endless content that links people to reddit or Facebook or Instagram. Tough problem
... Which is a good thing. (for the users, at least)
> do require some authenticity verification have already been proven
You can add levels.fyi to that list, as they now use actual offer letters to build their data set.
You mention realistic forgeries, AI, and huge volume as a possibility, and suggest the outcome would be that people are pushed back into the real world, but I'm not sure I see the connection.
If I can interact with bots that emulate humans with such a degree of realism, what do I care? You could be a bot; the whole of HN could be bots. I don't really care who wrote the text if I can get something from it. I don't have any idea who you are, and I don't even read usernames when reading posts here on HN.
At its core this seems like a moderation issue. If someone writes bots that just post low-quality nonsense, ban them. But if bots are merely wrong or not super eloquent, I can point you to Reddit and Twitter right now and you'll see a lot of that low-quality nonsense, all posted by actual humans. In fact, you can go outside and speak to real people and most of it is nonsense (me included).
The lines between the online world and the "real" world are always blurry. You might not care on HN, but you probably will care when you're trying to meet someone on a dating website and waste a bunch of time chatting with someone only to realize that they're a very convincing bot and that you've spent X hours that you could've been using to meet real people.
It seems like crowd-sourced moderation is probably the only thing that will work at scale. I've always wondered why Reddit doesn't rank comments by default according to someone's overall reputation inside of a subreddit and then by the relative merits of the comment on a particular subject. Getting the weighting right would be hard, but it seems like that would be the best way to dissuade low quality comments and outright trolling.
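A reputation-weighted ranking like the one described could be sketched roughly as below. This is a toy illustration of the weighting problem, not Reddit's actual algorithm; every name in it (`Comment`, `rank_score`, `karma_weight`) is hypothetical:

```python
import math
from dataclasses import dataclass

@dataclass
class Comment:
    author_karma: int  # author's accumulated reputation within this subreddit
    votes: int         # net votes on this particular comment
    text: str

def rank_score(c: Comment, karma_weight: float = 0.3) -> float:
    """Blend a comment's own votes with its author's standing.

    Log damping keeps high-karma accounts from dominating purely on
    reputation; karma_weight tunes how much reputation can lift (or,
    if negative, sink) a comment relative to its own votes.
    """
    karma_term = math.copysign(math.log1p(abs(c.author_karma)), c.author_karma)
    return c.votes + karma_weight * karma_term

comments = [
    Comment(author_karma=5000, votes=2, text="thoughtful reply"),
    Comment(author_karma=-40, votes=3, text="drive-by troll"),
]
ranked = sorted(comments, key=rank_score, reverse=True)
```

Getting the weighting right is exactly the hard part: too much karma influence entrenches incumbents, too little and the scheme does nothing against fresh troll accounts.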
>At its core this seems like a moderation issue. If someone writes bots that just post low-quality nonsense, ban them. But if bots are merely wrong or not super eloquent, I can point you to Reddit and Twitter right now and you'll see a lot of that low-quality nonsense, all posted by actual humans. In fact, you can go outside and speak to real people and most of it is nonsense (me included).
A relevant, if flip solution to the 'bot' issue[0].
[0]https://xkcd.com/810/
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.
Any kind of widely used identity/authentication system would need to be a protocol and not a product of a for-profit corporation. Businesses take on great risks if they use another corporation's products as part of their core operations as that product owner can change the terms of service at any time and pull the rug out from under them. A protocol is necessarily neutral so everyone can use it without risk in the same way they use HTTP.
For identity protocols I think BrightID (https://www.brightid.org/) is becoming more established and works pretty well.
See also Neal Stephenson's Fall: Dodge in Hell. What happens there though isn't authentic experiences but instead people buy tailored human/AI agent filters called editors to construct a reality for them by filtering out most media sources, including billboards and other interactive real-world advertisements and media screens. This way each individual has their own media reality.
> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.
Will they? People interact with these things because they are giving the brain what it wants, not what it might need. How many people would flock to a verified minimal bias news site? How many people would embrace so many hard truths and throw off their comforting lies? How many people could even admit to themselves they were being lied to and had formed their identity around those lies?
Do people want authentic now? The evidence says no.
I don't know if the news is really the best example of this today. Clearly there will always be a subjective bias in reporting the news, but as deep fakes become more prevalent it will become increasingly important to know that the origin of a video clip is trustworthy.
That said, there are clearly some social networks where you absolutely want to verify authenticity. Take for example, dating websites. Fake profiles _TODAY_ are a huge problem for those sites. If you have too many fake profiles, then paying users just log off and never come back. Same for LinkedIn. How many recruiters are going to pay for access to that network if 30% of the profiles are fake?
That's just digital certificate-based government ID. You could maybe provide some layer of abstraction above it to improve the developer experience, but at the end of the day you're reliant on it existing. Everything else will be too easily forged (unless you're planning on doing in-person validation).
You'd have to do in-person validation.
But bots, spam, and Russian memes are already deeply engaging to people. I'm sure it will only get worse, though obviously some people will opt out.
>IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.
The US government does authentication in real life via social security numbers. Of course, they are not very secure: a government-operated SSO or auth API for third-party applications would be a logical next step.
It would guarantee uniqueness and authenticity of users. Even better, if this were an inter-governmental program, it would deter government meddling: a state issuing too many tokens for fake accounts would arouse suspicion.
>Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.
I think you have completely misread the situation. The "fakification" of social media is already happening. Much if not most engagement is already driven by bots or by fabricated "influencers" and more people are using these platforms more often, not less.
I agree that the system is already heavily influenced by bots. I think the public's perception of just how much, though, does not match reality. As time goes on, the lay public will come to the same realization that many of us have already arrived at: it's all fake.
I think the critical threshold for most people will be when bots start impersonating people they know in person. At that point, the value of the social networks will evaporate.
>As time goes on though, the lay public will come to the same realization that many of us have already arrived at: it's all fake.
I don't share your optimism. Significant portions of the population believe the Earth is 6000 years old or is flat. Not sure why their critical thinking skills would suddenly improve at an opportune time.
> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.
Not so sure. I'd rather wager that people won't really care whether they interact with real humans or not. Why would it matter? It's not rare for people to relate to and feel emotions for virtual characters in video games, even though they are perfectly aware it's all fake! The same can be said for movies and TV shows. You know it's fake, yet you watch and enjoy. I'm not sure why it would be ANY different for social networks, which are basically just another form of entertainment.
This is very interesting. So basically, we'll all use fake personas managed by AI. And nothing online will be real...
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.
Ironically accounts with Twitter's blue check mark are often the accounts most likely to be managed by a social media manager.
Blue check accounts are expensive enough that, if you get the account banned, you can't easily make a new one. Bot accounts don't have this problem. If I want to trick as many people as possible into drinking bleach, I probably want easily-burnable bot accounts, so that when someone calls me out on it, I can just make a new one and pick up where I left off.
Of course, this also assists in Social Cooling, since controversial statements act a lot like totally false ones in the public eye.
China already has that. At age 16, all citizens must get an ID card. Photo and biometric info are recorded. To get a cell phone, the ID card is required, and as of last year, it's cross-checked by a face recognition scan. Cell phone IDs are tied to citizen IDs. WeChat accounts are verified against phone IDs.
Now that's authenticity verification.
Not that different in the EU. Most member states keep track of EU citizens from birth with a citizen ID. To get a phone, you need to show said ID. There are states which keep biometrics in the ID and passports, such as face biometrics and fingerprints. Some EU states even sample DNA from the child at time of birth and keep in their records for future use.