
Comment by andrewmutz

3 days ago

This is a good blog post. Two thoughts about it:

- Contradictory facts often shouldn't change beliefs, because it is extremely rare for a single fact in isolation to undermine a belief. If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper. It's only after reviewing a lot of facts on both sides of an issue that you can know enough to change your belief about something.

- The facts we're exposed to today are often extremely unrepresentative of the larger body of relevant facts. Say what you want about the previous era of corporate controlled news media, at least the journalists in that era tried to present the relevant facts to the viewer. The facts you are exposed to today are usually decided by an algorithm that is trying to optimize for engagement. And the people creating the content ("facts") that you see are usually extremely motivated, biased participants. There is zero effort by the algorithms or the content creators to present a reasonably representative set of facts on both sides of an issue.

I remember reading an article on one of the classic rationalist blogs (but they write SO MUCH I can't possibly find it) describing something like "rational epistemic skepticism" – or maybe a better term I can't recall either. (As noted below: "Epistemic learned helplessness")

The basic idea: an average person can easily be intellectually overwhelmed by a clever person (maybe the person is smarter, or more educated, or maybe they just studied up on a subject a lot). They basically know this... and also know that it's not because the clever person is always right. After all, there are lots of these clever people, and they don't all think the same thing, so they obviously can't all be right. But the average person (average with respect to whatever subject) is still rational and isn't going to let their beliefs bounce around. So they develop a defensive stance, a resistance to being convinced. And it's right that they do!

If someone confronts you with the PERFECT ARGUMENT, is it because the argument is true and revelatory? Or does it involve some sleight of hand? The latter is much more likely.

  • I tend to like the ethos/logos/pathos model. Arguments from clever people can sound convincing because ethos gets mixed in, and anyone can temporarily confuse someone by using pathos. This is why it's better to have arguments externalized in a form that can be reviewed on its own, logos only. It's the only style that can stand on its own without that ephemeral effect (aside from facts changing), and it's also the only one that can be adopted and owned by any listener who reviews it and proves it true to themselves.

  • It's usually dumb people who have so many facts and different arguments that one can't keep up.

    And they usually have so many of those because they were convinced to pay disproportionate attention to the topic and don't see the need to check anything or reject bad sources.

    • I noticed something similar. People who believe in absolute garbage tend to be the ones who don't have a robust BS filter that would let them quickly reject absolute garbage. And it's surprisingly orthogonal to a person's intelligence. There's a correlation, but even very intelligent people can have a very weak BS filter, and their intelligence post-rationalizes the absolute garbage they were unable to reject.


  • The problem isn't the PERFECT ARGUMENT, it's the argument that doesn't look like an argument at all.

    Take anti-vaxxers. If you try to argue with the science, you've already lost, because anti-vaxxers have been propagandised into believing they're protecting their kids.

    How? By being told that vaccinations are promoted by people who are trying to harm their kids and exploit the public for cash.

    And who tells them? People like them. Not scientists. Not those smart people who look down on you for being stupid.

    No, it's influencers who are just like them, part of the same tribe. Someone you could socialise with. Someone like you.

    Someone who only has your best interests at heart.

    And that's how it works. That's why the anti-vax and climate denial campaigns run huge bot farms with vast social media holdings which insert, amplify, and reinforce the "These people are evil and not like us and want to make you poor and harm your kids" messaging, combined with "But believe this and you will keep your kids safe".

    Far-right messaging doesn't argue rationally at all. It's deliberately and cynically calculated to trigger fear, disgust, outrage, and protectiveness.

    Consider how many far-right hot button topics centre on protecting kids from "weird, different, not like us" people - foreigners, intellectuals, scientists, unorthodox creatives and entertainers, people with unusual sexualities, outgroup politicians. And so on.

    So when someone tries to argue with it rationally, they get nowhere. The "argument" is over before it starts.

    It's not even about rhetoric or cleverness - both of which are overrated. It's about emotional conditioning using emotional triggers, tribal framing, and simple moral narratives, embedded with constant repetition and aggressive reinforcement.

    • I liked your point about tribalism up until you said one tribe is rational and the other not. The distribution of rational behavior does not change much tribe to tribe, it's the values that change. As soon as you say one tribe is more rational than another you're just feeding into more tribalism by insulting a whole group's intelligence.

      I think the real problem is that zero-friction global communication and social media have dramatically decreased the incentive to be thoughtful about anything. The winning strategy for anyone in the public eye is just to use narratives that resonate with people's existing worldview, because there is so much information out there, and our civilization has become so complex, that it's overwhelming to think about anything from first principles. Combine that with the dilution of local power as more and more things have gone online and global, and a lot of the incentives for people to be truthful and have integrity are gone, or at least dramatically diminished compared to the entirety of human history prior to the internet.


    • I really think most of these statements apply to both political sides of messaging in a majority of cases. You can't talk about in-group/out-group unless you draw a line somewhere, and in your comment you drew a line between people who represent science and rationality and those who are fearful and reactionary, which you'd believe to be a sensible place to draw that line if you habitually consume basically any media. The actual science seems mostly incidental to any conversation about it.

      Some people are crippled by anxiety and fear of the unknown or fear of their neighbors. It's sad, but it's not unique to political alignment.


    • I’d like to mildly point out that this style of caricaturing ideologies is one of the most effective ways of entrenching those same ideologies. If you can recognize that those critiquing you are doing so in bad faith, not only does it make the critique easy to dismiss, it provides evidence for the prior that all critiques are in bad faith and can be safely ignored.


    • It's also mentioned in "The Authoritarians" (search for the book and the short-form essay): roughly half the population is driven by intellectual curiosity about all kinds of things, and they don't always agree on much; they just want freedom to be individuals.

      The other half is driven by fear, disgust, paranoia, etc. That second group is much easier to trigger and convince: just play on their fears about their kids, their friends, their church ("will ban Bibles and churches"), etc. (I was raised in this kind of environment.)

      Authoritarians WANT a "strong leader" to tell them what to think, how to act, etc. That's how they show they belong to the tribe: they believe everything that is said, they give the most $$ to their church, etc.


    • > Take anti-vaxxers. If you try to argue with the science, you've already lost, because anti-vaxxers have been propagandised into believing they're protecting their kids

      What do you think causes vaccine injury?

      Do you believe in the zoonotic origin theory of Covid, rather than the Wuhan Institute of Virology accidentally releasing a coronavirus in Wuhan? Why do you think that is?

      Why do you think vaccine manufacturers asked governments for blanket immunity from prosecution?

      Why does the United States require children to get so many more vaccines than other developed western countries?

      Do you think you are assuming which side is rational?


    • Ah yes. People who think like you and agree with you are rational, not prone to fear, disgust, outrage, or protectiveness. But people who disagree with you are obviously irrational and can't be reasoned with. You are "educated" and they are "fear-mongers".


    • Just to add a little to the discussion, I suspect that the "not like us" messaging is mostly a right-wing thing, while there's more of a "don't contaminate my fluids" argument from the far-left.

      Neither is a rational argument, and both still trigger the same disgust and fear, but they tend to have different implications for outgroups.


  • Repetition breeds rationalism; variety of phrasing breeds facts.

    It's how the brain works: the more cognitive and perceptive angles agree on the observed, the more likely it is that the observed is really, actually, observed.

    Polysemous language (ambiguity) makes it easy to manipulate the observed: reinterpretation, mere exposure, and thus co-opted, portfolio-communist media and journalism optimize it, while using AI for everything will make it as efficient as it gets.

    Keep adding new real angles and they'll start to sweat, throw towels and tantrums, and aim for the weak.

To add to your second point, those algorithms are extremely easy to game for states with the resources and desire to craft narratives, specifically Russia and China.

There has actually been a pretty monumental shift in Russian election-meddling tactics in the last 8 years. Previously we had the troll army, in which the primary operating tactic of their bot farms was to pose as Americans (as well as Poles, Czechs, Moldovans, Ukrainians, Brits, etc.) while pushing Russian propaganda. Those bot farms were fairly easy to spot and ban, and there was a ton of focus on them after the 2016 election, so that strategy was short-lived.

Since then, Russia has shifted a lot closer to Chinese-style tactics, and now has a "goblin" army (contrasted with the troll army). This group no longer pushes the narratives itself, but rather uses seemingly mindless engagement interactions like scrolling, upvoting, clicking on comments, and replying to comments with LLMs in order to game what the social media algorithms show people. The goblins merely amplify actual Americans (not easily bannable bots) who happen to push views that are either in line with Russian propaganda or rhetoric that Russian intelligence views as harmful to the US. These techniques work spectacularly well for two reasons: the dopamine boost to users who say abominable shit, which encourages them to do more of it, and the morale-killing effect on people who might oppose such abominable shit but see how "popular" it is.

https://www.bruegel.org/first-glance/russian-internet-outage...

  • > These techniques work spectacularly well for two reasons

    Do they work spectacularly well, though? E.g. the article you link shows that Twitter accounts holding anti-Ukrainian views received 49 fewer reposts on average during a 2-hour internet outage in Russia. Even granting that all those reposts were part of an organized campaign (it's hardly surprising that people reposting anti-Ukrainian content are primarily to be found in Russia) and that 49 reposts massively boosted the visibility of this content, its effect is still upper-bounded by the effect of propaganda exposure on people's opinions, which is generally low. https://www.persuasion.community/p/propaganda-almost-never-w...

    • Notice that the two reasons I mentioned don't hinge on changing anyone's mind.

      1 - They boost dopamine reward systems in people who get "social" validation of their opinions or their persona as an influencer. This isn't something specific to propaganda; it's a well-observed phenomenon of social media behavior. It not only gives false validation to the person spreading the misinformation/opinions, but also influences other people who desire that sort of influence by giving them a successful example to replicate.

      2 - In aggregate, it demoralizes those who disagree with the opinions by demonstrating a false popularity. Imagine, for example, going to the comments of an Instagram post and seeing a blatant neo-Nazi holocaust-denial comment with 50,000 upvotes. It hasn't changed your mind, but it absolutely will demoralize you from thinking you have any sort of democratic power to overcome it.

      No opinions have changed, but more people are willing to do things that are destructive to social discourse, and fewer people are willing to exercise democratic methods to curb it.


  • > a "goblin" army

    Hah, a "monkey amplifier" army! Look at the garbage coming out of infinite monkeys' keyboards and boost whatever fits. Sigh.

  • What should make us believe any other state's propaganda is better, even for its own general population?

The best way to lie is not presenting false facts; it's curating facts to suit your narrative. It's also easy to accidentally lie to yourself or others in this way. See a great many news stories.

  • The act of curating facts itself is required to communicate anything because there are an infinite number of facts. You have to include some and exclude others, and you arrange them in a hierarchy of value that matches your sensibilities. This is necessary in order to perceive the world at all, because there are too many facts and most of them need to be filtered. Everyone does this by necessity. Your entire perceptual system and senses are undergirded by this framework.

    There is no such thing as "objective" because it would include all things, which means it could not be perceived by anyone.

    • The subjective/objective split is useful. What good is raising the bar for objectivity such that it can never be achieved? Better to have objective just mean that nobody in the current audience cares to suggest contradictory evidence.

      It's for indicating what's in scope for debate, and what's settled. No need to invoke "Truth". Being too stringent about objectivity means that everything is always in scope for debate, which is a terrible place to be if you want to get anything done.

  • I often put it this way: you can lie with the truth. I feel like most people don't get this.

  • Another very good way to lie is to set up the framing such that any interpretation of any fact skews in your desired direction: which things are considered important or relevant, what kinds of argument are considered valid or not. Done well, people might not even pick up that there is lying/misdirection involved. Rig the game.

The idea that people believe in climate change (or evolution) is odd considering people don't say they believe in General Relativity or atomic theory of chemistry. They just accept those as the best explanations for the evidence we have. But because climate change and evolution run counter to some people's values (often religious but also financially motivated), they get called beliefs.

  • You generally don't oppose things unless you can grasp them to the point where you understand how they challenge other beliefs you have culturally or intuitively integrated.

    Evolution directly challenges the idea that humans are very special creatures in a universe where mighty mystic forces care about them a lot.

    Climate change, and the weight of human industry in it, directly challenges the lifestyle expectations of the wealthiest.

    • To some extent, physics/chemistry/etc. challenge the notion that free will exists, but that challenge is far enough removed, and rarely enough touched upon, that people who believe in free will don't feel that modern science is attacking that belief. And the scientists working on it generally see free will, or any mechanism of the brain, as far too complex to address when they are studying things on the order of a few particles or a few molecules.

      Some of neurology/psychology gets a bit closer, but the science of the brain doesn't have major theories that are taught on the same level or that have much impact on public policy. The closest I can think of is how much public awareness of what constitutes a mental disorder lags behind the science, but that area is still constantly contested even among the researchers themselves, which prevents a unified message from being given to the public that they must then respond to (choosing to believe the science or not).

  • > But because climate change and evolution run counter to some people's values (often religious but also financially motivated), they get called beliefs

    Hey, weren't we just talking about propaganda?

Thanks for your thoughts; they perfectly extend mine. I agree that it would be a sign of a very fragile belief system if it got unwound by a single bit of contradictory evidence. And as for the "facts" we're getting 24/7 out of every microwave: they're just a sign of the complete decoupling of people's beliefs from empirical reality, in my humble opinion. Supply and demand and all that.

  • I would contend that empiricism is inadequate to discern what is real and what is true. Much of human experience, and much of what is meaningful about being a person, is neither measurable nor quantifiable.

> the previous era of corporate controlled news media... The facts you are exposed to today are usually decided by an algorithm

... But that algorithm is still corporate controlled.

> Say what you want about the previous era of corporate controlled news media, at least the journalists in that era tried to present the relevant facts to the viewer.

If you think this reduced bias, you couldn't be more wrong - it only made the bias harder to debunk. Deciding which facts are "relevant" is one easy way to bias reporting, but the much easier, much more effective way is deciding which stories are "relevant". Journalists have their own convictions and causes, and these motivate which incidents they cast as isolated and random, to be buried in the news, and which they cast as part of a wider trend, a "conversation that we as a nation must have", etc., deserving front-page treatment.

A typical example: "And third, the failure of its findings to attract much notice, at least so far, suggests that scholars, medical institutions and members of the media are applying double standards to such studies." - https://www.economist.com/united-states/2024/10/27/the-data-... (unpaywalled: https://archive.md/Mwjb4)

> If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper.

Although your wider point is sound, that specific example should undermine your belief quite significantly if you're a rational person.

1. It's a group of scientists and their work was reviewed, so they are probably all dishonest.

2. They did it because they expected it to work.

3. If they expected it to work it's likely that they did it before and got away with it, or saw others getting away with it, or both.

4. If there's a culture of people falsifying data and getting away with it, that means there's very likely to be more than one paper with falsified data. Possibly many such papers. After all, the authors have probably authored papers previously and those are all now in doubt too, even if fraud can't be trivially proven in every case.

5. Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.

6. Therefore, it's likely the evidence base is not as robust as previously believed.

7. Therefore, your belief in the likelihood of their claims being true should be lowered.

In reality how much you should update your belief will depend on things like how the fraud was discovered, whether there were any penalties, and whether the scientists showed contrition. If the fraud was discovered by people outside of the field, nothing happened to the miscreants and the scientists didn't care that they got caught, the amount you should update your belief should be much larger than if they were swiftly detected by robust systems, punished severely and showed genuine regret afterwards.
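To make step 7 concrete, here's a minimal Bayes-rule sketch in Python. The prior and the likelihoods are entirely made-up numbers, just to show the mechanics of the update:

    # Hypothesis H: "the field's evidence base is robust."
    # Evidence E: "a fraudulent paper was exposed."
    p_h = 0.95              # prior belief that the evidence base is robust
    p_e_given_h = 0.05      # chance of an exposed fraud in a robust field
    p_e_given_not_h = 0.40  # chance of an exposed fraud if fraud is cultural

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    print(f"P(robust | fraud exposed) = {p_h_given_e:.2f}")  # ~0.70

With these numbers the belief drops from 0.95 to about 0.70: lowered, as step 7 says, but not destroyed. The size of the drop is governed by the likelihood ratio, which is exactly where the factors above (how the fraud was discovered, penalties, contrition) come in.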

  • You're making a chain of assumptions and deductions that are not necessarily true given the initial statement of the scenario. Just because you think those things logically follow doesn't mean that they do.

    You also make throwaway assertions like "That's why so many claims are only found to not replicate years or decades after they were published." What is "so many claims?" The majority? 10%? 0.5%?

    I totally agree with you that the nuances of the situation are very important to consider, and the things you mention are possibilities, but you are too eager to reject things if you think "that specific example should undermine your belief quite significantly if you're a rational person." You made lots of assumptions in these statements and I think a rational person with humility would not make those assumptions so quickly.

    • > What is "so many claims?" The majority? 10%? 0.5%?

      Wikipedia has a good intro to the topic. Some quick stats: "only 36% of the replications [in psychology] yielded significant findings", "Overall, 50% of the 28 findings failed to replicate despite massive sample sizes", "only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies", "A survey of cancer researchers found that half of them had been unable to reproduce a published result".

      The example is hypothetical and each step is probabilistic, so we can't say anything is necessarily true. But which parts of the reasoning do you think are wrong?


  • > It's a group of scientists and their work was reviewed, so they are probably all dishonest.

    Peer review is a very basic check, more or less asking someone else in the field "Does this paper, as presented, make any sense?". It's often overvalued by people outside the field, but it's table stakes to the scientific conversation, not a seal of approval by the field as a whole.

    >Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.

    I think it's rare that scientists take things completely at face value. Even without fraud, it's easy for people to make mistakes, and it's rare that everyone in a field actually agrees on all the details. So if someone is relying on a paper for something, they will generally examine it quite closely, talk to the original authors, and, to whatever extent practical, attempt to verify it themselves. The publishing process doesn't tend to reward this behavior, though, unfortunately. (And as a result, an external observer doesn't generally see the output of this process: if someone concludes that a result is BS, they're much more likely to drop it than try to publish a rebuttal, unless it's something particularly important.)

    • Sorry, what I meant was that the authors on a paper are supposed to be reviewing each other's contributions. They should all have access to the same data and understand what's going on. In practice, that doesn't always happen of course. But it should. Peer review where a journal just asks someone to read the final result is indeed a much weaker form of check.

      There are way too many cases of bogus papers being cited hundreds or thousands of times for me to believe scientists check the papers they are building on. It probably depends a lot on the field, though; this stuff always does.

See also: the Chinese robber fallacy.

Even if only 0.1% of Chinese people engaged in theft, and that would be a much lower rate than in any developed country, you'd still get a million Chinese thieves. You could show a new one every day, bombarding people with images and news reports of how untrustworthy Chinese people are. The news reports themselves wouldn't even be misinformation, as all the people shown would actually be guilty of the crimes they were accused of. Nevertheless, people would draw the wrong conclusion.
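The rough arithmetic, just to make the scale vivid (the population figure here is approximate, and the 0.1% is the hypothetical rate from above):

    population = 1_400_000_000  # ~1.4 billion people, approximate
    theft_rate = 0.001          # the hypothetical 0.1% rate

    thieves = int(population * theft_rate)   # 1,400,000 thieves
    years_of_daily_coverage = thieves / 365  # ~3,800 years
    print(f"{thieves:,} thieves: one new, truthfully reported example "
          f"per day for {years_of_daily_coverage:,.0f} years")

A one-a-day drumbeat of individually accurate reports could run for nearly four millennia without repeating a single thief, while saying nothing about the other 99.9%.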

Many people are curious about truth. But because of gaslighting, the lack of a single source of truth, and too much noise, people have checked out completely. People know something is fishy; they know the barbarians are at the gate. But they also know that the gate is 10,000 km away, so they think, "Let me live my life peacefully in the meantime." They have lost hope in the system.