
Comment by themaninthedark

5 months ago

"Beware of he who would deny you access to information for in his heart he dreams himself your master." - Commissioner Pravin Lal, U.N. Declaration of Rights

Full quote: "As the Americans learned so painfully in Earth's final century, free flow of information is the only safeguard against tyranny. The once-chained people whose leaders at last lose their grip on information flow will soon burst with freedom and vitality, but the free nation gradually constricting its grip on public discourse has begun its rapid slide into despotism. Beware of he who would deny you access to information, for in his heart he deems himself your master."

(Alpha Centauri, 1999, https://civilization.fandom.com/wiki/The_Planetary_Datalinks... )

  • "I sit here in my cubicle, here on the motherworld. When I die, they will put my body in a box and dispose of it in the cold ground. And in the million ages to come, I will never breathe, or laugh, or twitch again. So won't you run and play with me here among the teeming mass of humanity? The universe has spared us this moment."

    ~Anonymous, Datalinks.

There is a difference between free flow of information and propaganda. Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

I think you could make a reasonable argument that the algorithms that distort social media feeds actually impede the free flow of information.

  • > Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

    The fundamental problem here is exactly that.

    We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

    Which means there are no ads: nobody really wants ads, so their user agent doesn't show any. Ads are the source of the monopolist's existing incentive to fill the feed with rage bait, so that incentive goes away.

    The cost is that you either need a P2P system that actually works or people who want to post a normal amount of stuff to social media need to pay $5 for hosting (compare this to what people currently pay for phone service). But maybe that's worth it.
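The client-sorted feed idea above can be sketched concretely. This is a minimal illustration, not any real protocol: the posts, feed names, and timeline size are all hypothetical, and a real system would fetch feeds over something like RSS or ActivityPub rather than from in-memory lists.

```python
from dataclasses import dataclass

@dataclass
class Post:
    feed: str
    timestamp: int  # seconds since epoch
    text: str
    is_ad: bool = False  # the user agent, not the server, decides what to drop

def client_timeline(feeds, limit=50):
    """Merge subscribed feeds entirely on the client: drop ads,
    then sort newest-first. No central ranker is involved."""
    posts = [p for feed in feeds for p in feed if not p.is_ad]
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)[:limit]

# Hypothetical subscriptions: each "feed" is just a list of posts.
alice = [Post("alice", 100, "hello"), Post("alice", 300, "news")]
sponsor = [Post("sponsor", 400, "BUY NOW", is_ad=True)]
bob = [Post("bob", 200, "photo")]

timeline = client_timeline([alice, sponsor, bob])
print([p.text for p in timeline])  # newest first, ads filtered out
```

Because the sort key lives in the client, swapping reverse-chronological order for any other ordering is a one-line change under the user's control.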

    • >We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

      The Fediverse[1] with ActivityPub[0]?

      [0] https://activitypub.rocks/

      [1] https://fediverse.party/


  • There is no generally accepted definition of propaganda. One person's propaganda is another person's accurate information. I don't trust politicians or social media employees to make that distinction.

    • There are definitely videos that are propaganda.

      Like those low-quality AI videos about Trump or Biden saying things that never happened. Anyone with critical thinking knows those are either propaganda or engagement farming.


      What you think is propaganda is irrelevant. When you let people unnaturally amplify information by paying to have it forced into someone’s feed, that distorts the free flow of information.

      Employees choose what you see every day you use most social media.


    • And propaganda by definition isn’t false information. Propaganda can be factual as well.

    • So many people have just given up on the very idea of coherent reality? Of correspondence? Of grounding?

      Why? No one actually lives like that when you watch their behavior in the real world.

      It's not even post modernism, it's straight up nihilism masquerading as whatever is trendy to say online.

      These people accuse everyone of bias while ignoring that their own position comes from a place of such extreme bias that it irrationally, presuppositionally rejects the possibility of true facts in their chosen, arbitrary cut-outs. It's special pleading as a lifestyle.

      It's very easy to observe, model, and simulate node-based computer networks that allow for coherent and well-formed data with high correspondence, and very easy to see networks destroyed by noise and data drift.

      We have this empirically observed in real networks; it's pragmatic, and it's why the internet and other complex systems run. People rely on real network systems and the observed facts of how they succeed or fail, then try to undercut those hard-won truths from a place of utter ignorance. While relying on them! It's absurd ideological parasitism: they deny the value of the things they demonstrably value just by posting! Just the silliest form of performative contradiction.

      I don't get it. Facts are facts. A thing can be objectively true in what for us is a linear global frame. The log is the log.

      Wikipedia and federated text content should never be banned: logs, timelines, data, etc. But memes and other primarily emotive media are case by case; I don't see their value. I don't see the value in allowing people to present unprovable or demonstrably false data using a dogmatically, confidently true narrative.

      I mean present whatever you want but mark it as interpretation or low confidence interval vs multiple verified sources with a paper trail.

      Data quality, grounding and correspondence can be measured. It takes time though for validation to occur, it's far easier to ignore those traits and just generate infinite untruth and ungrounded data.

      Why do people prop up infinite noise generation as if it was a virtue? As if noise and signal epistemically can't be distinguished ever? I always see these arguments online by people who don't live that way at all in any pragmatic sense. Whether it's flat earthers or any other group who rejects the possibility of grounded facts.

      Interpretation is different, but so is the intentional destruction of a shared meaning space by turning every little word into a shibboleth.

      People are intentionally destroying the ability to even negotiate connections to establish communication channels.

      Infinite noise leads to runaway network failure and, in human systems, the inevitability of violence. I for one don't like to see people die because the system has destroyed message passing via attentional DDoS.


      There isn’t. Yet everybody knows what I mean by “propaganda against immigration” (somebody would discredit it, somebody would fight for it), and nobody claims that the Hungarian government’s “information campaign” about migrants is not fascist propaganda (except the government, obviously, but not even their followers deny it). So yes, the edges are blurred, yet we can clearly identify some propaganda.

      Also, accurate information (like "here are 10 videos about blacks killing whites") with distorted statistics (there is twice as much white on black murder) is still propaganda. But these are difficult to identify, since they clearly affect almost the whole population. Not many people have even tried to fight against it. Especially because the propaganda’s message is created by you. // The example is fiction - but the direction exists, just look at Kirk’s twitter for example -, I have no idea about the exact numbers off the top of my head

  • Propaganda wouldn't be such a problem if content weren't dictated by a handful of corporations, and we people weren't so unbelievably gullible.

  • Oh, but can you make an argument that the government, pressuring megacorporations with information monopolies to ban things they deem misinformation, is a good thing and makes things better?

    Because that's the argument you need to be making here.

      You don't even need to make the argument. Go copy-paste some top HN comments on this issue from around the time of the actions we're discussing, the ones YouTube is now reversing.


    • Not really. You can argue that the government should have the right to request content moderation from private platforms and that private platforms should have the right to decline those requests. There are countless good reasons for both sides of that.

      In fact, this is the reality we have always had, even under Biden. This stuff went to court. They found no evidence of threats against the platforms, the platforms didn't claim they were threatened, and no platform said anything other than they maintained independent discretion for their decisions. Even Twitter's lawyers testified under oath that the government never coerced action from them.

      Even in the actual letter from YouTube, they affirm again that they made their decisions independently: "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the company to remove non-violative user-generated content."

      So where does "to press" land on the spectrum between requesting action and coercion? Well, one key variable would be the presence of some type of threat. Not a single platform has argued they were threatened either implicitly or explicitly. Courts haven't found evidence of threats. Many requests were declined and none produced any sort of retaliation.

      Here's a threat the government might use to coerce a platform's behavior: a constant stream of subpoenas! Well, wouldn't you know it, that's exactly what produced the memo FTA.[1]

      Why hasn't Jim Jordan just released the evidence of Google being coerced into these decisions? He has dozens if not hundreds of hours of filmed testimony from decision-makers at these companies he refuses to release. Presumably because, like in every other case that has actually gone to court, the evidence doesn't exist!

      [1] https://www.politico.com/live-updates/2025/03/06/congress/ji...


That sounds great in the context of a game, but in the years since its release, we have also learned that those who style themselves as champions of free speech also dream themselves our master.

They are usually even more brazen in their ambitions than the censors, but somehow get a free pass because, hey, he's just fighting for the oppressed.

  • I'd say free speech absolutism (read: early-pandemic Zuckerberg, not thumb-on-the-scales Musk) has always aged better than the alternatives.

    The trick is there's a fine line between honest free speech absolutism and 'pro free speech I believe in and silence about the freedom of that I don't.' Usually when ego and power get involved (see: Trump, Musk).

    To which, props to folks like Ted Cruz on vocally addressing the dissonance of and opposing FCC speech policing.

  • Anything that people uncritically see as good attracts the evil and the illegitimate, because they cannot build power on their own, so they must co-opt things people see as good.

Not in the original statement, but as it's referenced here, the word 'information' is doing absolutely ludicrous amounts of lifting. Hopefully it bent at the knees, because in my book it broke.

You can't call the phrase "the sky is mint chocolate chip pink with pulsating alien clouds" information.

  • While this is true, it's also important to realize that during the great disinformation hysteria, perfectly reasonable statements like "This may have originated from a lab", "These vaccines are non-sterilizing", or "There were some anomalies of Benford's Law in this specific precinct and here's the data" were lumped into the exact same bucket as "The CCP built this virus to kill us all", "The vaccine will give you blood clots and myocarditis", or "The DNC rigged the election".

    The "disinformation" bucket was overly large.

    There was no nuance. No critical analysis of actual statements made. If it smelled even slightly off-script, it was branded and filed.
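For what it's worth, the Benford's Law check mentioned above is simple to state. A minimal sketch, with made-up numbers standing in for the tallies (and noting that first-digit tests on precinct-level counts are statistically contentious):

```python
from collections import Counter
from math import log10

def first_digit_freqs(values):
    """Observed frequency of leading digits 1-9 across positive integers."""
    digits = [int(str(v)[0]) for v in values if v > 0]
    counts = Counter(digits)
    return {d: counts[d] / len(digits) for d in range(1, 10)}

def benford_expected(d):
    """Benford's Law: P(first digit = d) = log10(1 + 1/d)."""
    return log10(1 + 1 / d)

# Made-up tallies, purely illustrative.
tallies = [1823, 1204, 3911, 145, 2210, 1087, 9432, 1765, 2650, 118]
observed = first_digit_freqs(tallies)
for d in range(1, 10):
    print(d, round(observed[d], 2), round(benford_expected(d), 2))
```

Whether a given deviation is an "anomaly" is a statistical question (sample size, chi-squared thresholds), which is exactly the nuance the bucket-sorting skipped.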

    • But it is because of the deluge that that happens. We can only process so much information. If the amount of "content" coming through is orders of magnitude larger, it makes sense to just reject everything that looks even slightly like nonsense, because there will still be more than enough left over.


  • You can call it data and have sufficient respect for others that they may process it into information. Too many have too little faith in others. If anything, we need to be deluged in data, and we will probably work it out ourselves eventually.

    • Facebook does its utmost to subject me to Tartarian, Flat Earth and Creationist content.

      Yes, I block it routinely. No, the algo doesn't let up.

      I don't need "faith" when I can see that a decent chunk of people disbelieve modern history, and aggressively disbelieve science.

      More data doesn't help.

This is a fear of an earlier time.

We are not controlling people by reducing information.

We are controlling people by overwhelming them in it.

And when we think of a solution, our natural inclination to “do the opposite” smacks straight into our instinct against controlling or reducing access to information.

The closest I have come to any form of light at the end of the tunnel is Taiwan’s efforts to create digital consultations for policy, and the idea that facts may not compete on short time horizon, but they surely win on longer time horizons.

  • The problem is that in our collective hurry to build and support social networks, we never stopped to think about what other functions might be needed alongside them to promote a good, factual society.

    People should be able to say whatever the hell they want, wherever the hell they want, whenever the hell they want. (Subject only to the imminent danger test)

    But! We should also be funding robust journalism to exist in parallel with that.

    Can you imagine how different today would look if the US had levied a 5% tax on social media platforms above a certain size, with the proceeds used to fund journalism?

    That was a thing we could have done. We didn't. And now we're here.

Beware of those who quote videogames and yet attribute them to "U.N. Declaration of Rights".

  • They're not wrong; the attribution is part of the quote. In-game, the source of the quote is usually important, and is always read aloud (unlike in Civ).

    • I would argue that they are, if not wrong, at least misleading.

      If you've never played Alpha Centauri (like me) you are guaranteed to believe this to be a real quote by a UN diplomat. It also doesn't help that searching for "U.N. Declaration of Rights" takes me (wrongly) to the (real) Universal Declaration of Human Rights. I only noticed after reading ethbr1's comment [1], and I bet I'm not the only one.

      [1] https://news.ycombinator.com/item?id=45355441


Beware he who would tell you that any effort at trying to clean up the post apocalyptic wasteland that is social media is automatically tyranny, for in his heart he is a pedophile murderer fraudster, and you can call him that without proof, and when the moderators say your unfounded claim shouldn't be on the platform you just say CENSORSHIP.

The thing is that burying information in a firehose of nonsense is just another way of denying access to it. A great way to hide a sharp needle is to dump a bunch of blunt ones on top of it.

Sure, great. Now suppose that a very effective campaign of social destabilisation propaganda exists that poses an existential risk to your society.

What do you do?

It's easy to rely on absolutes and pithy quotes that don't solve any actual problems. What would you, specifically, with all your wisdom, do?

  • Let's not waste time on idle hypotheticals and fear mongering. No propaganda campaign has ever posed an existential threat to the USA. Let us know when one arrives.

    • Have you seen the US recently? Just in the last couple of days, the president is standing up and broadcasting clear medical lies about autism, while a large chunk of the media goes along with him.


    • It doesn't have to be a national threat. Social media can be used by small organisations or even sufficiently motivated individuals to easily spread lies and slander against individuals or groups, and it's close to impossible to prevent (I've been fighting some trolls threatening a group of friends on Facebook lately, and I can attest how much the algorithm favors hate speech over reason)


  • There are twin goals: total freedom of speech and holding society together (limiting polarization). I would say you need non-anonymous speech, reputation systems, traceable moderation (who did you upvote), etc. You can say whatever you want, but be ready to stand by it.

    One could say the problem with freedom of speech was that there weren't enough "consequences" for antisocial behavior. The malicious actors stirred the pot with lies, the gullible and angry encouraged the hyperbole, and the whole US became polarized and divided.

    And yes, this system chills speech, as one would be reluctant to voice extreme opinions. You would still have the freedom to say it, but the additional controls exert a pull back toward the average.

Is your point that any message is information?

Without truth there is no information.

That seems to be exactly her point, no?

Imagine an interface that reveals the engagement mechanism by, say, having an additional iframe. In this iframe an LLM clicks through its own set of recommendations picked to minimize negative emotions at the expense of engagement.

After a few days you're clearly going to notice the LLM spending less time than you clicking on and consuming content. At the same time, you'll also notice its choices are part of what seems to you a more pleasurable experience than you're having in your own iframe.

Social media companies deny you the ability to inspect, understand, and remix how their recommendation algos work. They deny you the ability to remix an interface that does what I describe.

In short, your quote surely applies to social media companies, but I don't know if this is what you originally meant.
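The side-by-side comparison described above (the same candidate posts ranked by two different objectives) reduces to a choice of sort key. A toy sketch, where the engagement and negativity scores are invented numbers that a real system would predict with models:

```python
# Hypothetical candidate posts with invented per-post scores.
posts = [
    {"title": "outrage thread", "engagement": 0.9, "negativity": 0.8},
    {"title": "friend's photos", "engagement": 0.4, "negativity": 0.1},
    {"title": "calm explainer", "engagement": 0.5, "negativity": 0.2},
]

def platform_rank(posts):
    """Platform objective: maximize predicted engagement."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def agent_rank(posts):
    """User-agent objective: minimize negativity; engagement breaks ties."""
    return sorted(posts, key=lambda p: (p["negativity"], -p["engagement"]))

print([p["title"] for p in platform_rank(posts)])  # engagement-first order
print([p["title"] for p in agent_rank(posts)])     # wellbeing-first order
```

The inputs are identical; only the objective differs. That is what making the ranking function inspectable and remixable would buy the user.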

Raising the noise floor of disinformation to drown out information is a way of denying access to information too.

Facebook speaks through what it chooses to promote or suppress and they are not liable for that speech because of Section 230.

  • Not quite: prior to the Communications Decency Act of 1996 (which contained Section 230), companies were also not liable for the speech of their users, but lost that protection if they engaged in any moderation. The two important cases at hand are Stratton Oakmont, Inc. v. Prodigy Services Co. and Cubby, Inc. v. CompuServe Inc.

    The former moderated content and was thus held liable for posted content. The latter did not moderate content and was determined not to be liable for user generated content they hosted.

    Part of the motivation of section 230 was to encourage sites to engage in more moderation. If section 230 were to be removed, web platforms would probably choose to go the route of not moderating content in order to avoid liability. Removing section 230 is a great move if one wants misinformation and hateful speech to run unchecked.