Comment by dfabulich

4 days ago

The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.

Well, it turns out that intuition and long-lived cultural norms often have rational justifications, but individuals may not know what they are, and norms/intuitions provide useful antibodies against narcissistic would-be cult leaders.

Can you find the "rational" justification not to isolate yourself from non-Rationalists, not to live with them in a polycule, and not to take a bunch of psychedelic drugs with them? If you can't solve that puzzle, you're in danger of letting the group take advantage of you.

Yeah, I think this is exactly it. If something sounds extremely stupid, or if everyone around you says it's extremely stupid, it probably is. If you can't justify it, it's probably because you have failed to find the reason it's stupid, not because it's actually genius.

And the crazy thing is, none of that is fundamentally opposed to rationalism. You can be a rationalist who ascribes value to gut instinct and societal norms. Those are the product of millions of years of pre-training.

I have spent a fair bit of time thinking about the meaning of life. And my conclusions have been pretty crazy. But they sound insane, so until I figure out why they sound insane, I'm not acting on those conclusions. And I'm definitely not surrounding myself with people who take those conclusions seriously.

> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.

Specifically, rationalism spends a lot of time on priors, but a sneaky thing happens that I call the 'double update'.

Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this; sometimes it's easy to do and sometimes it's difficult.

What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They tend to think of this as one update, but it is really two. The first update, relaxing the priors, isn't associated with any evidence; it's part of the community norms. There is an implicit belief that by relaxing your priors you become more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.

Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.

Correcting for bias should itself be an evidence-driven update, not a relaxing of your priors just because everyone around you is doing it.
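
To make the arithmetic concrete, here is a minimal sketch (invented numbers, a textbook normal-normal model, nothing specific to any real case): the same observation is processed twice, once against the genuine prior and once after the prior has been quietly widened first. The "relaxed" version gets dragged much further toward the evidence than the likelihood actually justifies.

    # A normal prior N(mu0, tau^2) updated on one observation x with known
    # noise sd sigma. The posterior mean is a precision-weighted average of
    # the prior mean and the data.
    def posterior(mu0, tau, x, sigma):
        w_prior = 1 / tau ** 2        # prior precision
        w_data = 1 / sigma ** 2       # data precision
        mean = (w_prior * mu0 + w_data * x) / (w_prior + w_data)
        sd = (w_prior + w_data) ** -0.5
        return mean, sd

    mu0, tau = 0.0, 1.0               # your genuine prior
    x, sigma = 4.0, 2.0               # one noisy observation

    print(posterior(mu0, tau, x, sigma))      # one honest update: mean = 0.8
    print(posterior(mu0, 3 * tau, x, sigma))  # prior "relaxed" first: mean is about 2.8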

  • > Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.

    I'm not following this example at all. If you've zero'd out the scale by tilting, why would adding flour until it reads 1g lead to 2g of flour?

    • I agree. It's not the best metaphor.

      I played around with various metaphors, but most of them felt worse to various degrees. The idea of relaxing priors and then doing an evidence-based update, while thinking of it as a single genuine update, is a difficult thing to capture metaphorically.

      Happy to hear better suggestions.

      EDIT: Maybe something more like this:

      Picture your belief as a shotgun aimed at the truth:

          Aim direction = your best current guess.
          Spread = your uncertainty.
          Evidence = the pull that says "turn this much" and "widen/narrow this much."

      The correct move is one clean Bayesian shot.

      Hold your aim where it is. Evidence arrives. Rotate and resize the spread in one simultaneous posterior jump determined by the actual likelihood ratio in front of you.

      The stupid move? The move that Rationalists love to disguise as humility? It's to first relax your spread "to be open-minded," and then apply the update. You've just secretly told the math, "Give this evidence more weight than it deserves." And then you wonder why you keep overshooting, drifting into confident nonsense.

      If you think your prior is overconfident, that is itself evidence. Evidence about your meta-level epistemic reliability. Feed it into the update properly. Do not amputate it ahead of time because "priors are bias." Bias is bad, yes, but closing your eyes and spinning around with the shotgun in hand (i.e., double updating) is not an effective way to remove bias.
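
      To make "feed it into the update properly" concrete, here is a minimal sketch (my own illustrative numbers and framing, not anything canonical): rather than widening the prior by hand, treat "my prior might be overconfident" as a hypothesis of its own, a mixture of a narrow and a wide prior, and let the evidence reweight it.

          import math

          def norm_pdf(x, mean, var):
              # density of N(mean, var) at x
              return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

          mu0, sigma = 0.0, 2.0                  # prior mean, observation noise sd
          x = 4.0                                # the observation
          # (weight, prior sd): mostly trust the prior, with some credence
          # that it is overconfident and should really be wider.
          components = [(0.7, 1.0), (0.3, 3.0)]

          # Each hypothesis about your own reliability is scored by how well it
          # predicted the data (its marginal likelihood), so "my prior was too
          # tight" gains exactly as much weight as the evidence earns it.
          marginals = [w * norm_pdf(x, mu0, tau ** 2 + sigma ** 2) for w, tau in components]
          print([m / sum(marginals) for m in marginals])   # roughly [0.58, 0.42]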

  • Thanks, that's a fantastic description of a phenomenon I've observed but couldn't quite put my finger on.

> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.

The game as it is _actually_ played is that you use rationalist arguments to justify your pre-existing gut intuitions and personal biases.

  • Exactly. Humans are rationalizers. We operate on pre-existing gut intuitions and biases, then invent after-the-fact, rational-sounding justifications.

    I guess Pareto wasn't on the reading list for these intellectual frauds.

    Those are actually the priors being updated lol.

  • Which is to say, Rationalism is easily abused to justify any behavior contrary to its own tenets, just like any other -ism.

> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.

This is why it is so naive - gut intuitions and cultural norms pretty much dictate what it means for an argument to be rational.