Comment by kelseyfrog
4 days ago
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Specifically, Rationalism spends a lot of time talking about priors, but a sneaky thing happens that I call the 'double update'.
Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this; sometimes it's easy to do and sometimes it's difficult.
What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They often think of this as one update, but it's really two. The first update, relaxing the priors, isn't associated with any evidence; it's part of the community norms. There is an implicit belief that by relaxing your priors you become more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.
Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
Correcting for bias means updating on evidence, not relaxing your priors just because everyone around you is doing it. A toy demo of the difference is below.
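To make the two updates concrete, here's a minimal sketch in a toy Beta-Bernoulli coin model (the numbers are mine, purely for illustration). Flattening a confident prior before updating lets ten flips move the posterior far more than the evidence actually warrants:

```python
# Toy Beta-Bernoulli demo of the "double update" (illustrative numbers).
# Genuine prior: long experience says the coin is fair.
a, b = 50.0, 50.0            # Beta(50, 50): confident the coin is fair
heads, tails = 8, 2          # new evidence: 8 heads in 10 flips

# One clean Bayesian update: the conjugate posterior is Beta(a+heads, b+tails).
honest = (a + heads) / (a + b + heads + tails)

# The double update: first relax the prior "to be open-minded"
# (an update with no evidence behind it), then apply the same data.
a_flat, b_flat = 1.0, 1.0    # prior quietly flattened to Beta(1, 1)
double = (a_flat + heads) / (a_flat + b_flat + heads + tails)

print(f"honest posterior mean: {honest:.3f}")  # ~0.527: barely moved
print(f"double-update mean:    {double:.3f}")  # ~0.750: wild overshoot
```

Same data, same likelihood; the only difference is the evidence-free first step, and it ends up counting for more than the evidence itself.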
> Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
I'm not following this example at all. If you've zeroed out the scale by tilting, why would adding flour until it reads 1g lead to 2g of flour?
I agree. It's not the best metaphor.
I played around with various metaphors, but most of them felt worse to varying degrees. The idea of relaxing priors and then doing an evidence-based update, while believing it's genuinely a single update, is a difficult thing to capture metaphorically.
Happy to hear better suggestions.
EDIT: Maybe something more like this:
Picture your belief as a shotgun aimed at the truth:
The correct move is one clean Bayesian shot.
Hold your aim where it is. Evidence arrives. Rotate and resize the spread in one simultaneous posterior jump determined by the actual likelihood ratio in front of you.
The stupid move? The move that Rationalists love to disguise as humility? It's to first relax your spread "to be open-minded," and then apply the update. You've just secretly told the math, "Give this evidence more weight than it deserves." And then you wonder why you keep overshooting, drifting into confident nonsense.
If you think your prior is overconfident, that is itself evidence - evidence about your meta-level epistemic reliability. Feed it into the update properly. Do not amputate it ahead of time because "priors are bias." Bias is bad, yes, but closing your eyes and spinning around with the shotgun in hand - i.e., double updating - is not an effective way to remove bias.
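As a sketch of what feeding it in properly could look like (same toy coin as above, with made-up numbers), model the doubt as a mixture prior: one component says your confident prior is right, another says you're overconfident. Bayes then reweights the components by how well each one predicted the data, instead of you flattening the prior by hand before looking:

```python
# Mixture-prior sketch: meta-uncertainty handled inside the update
# (same toy Beta-Bernoulli coin; all numbers are illustrative).
from math import exp, lgamma

def log_marginal(a, b, heads, tails):
    # log P(data | Beta(a, b) prior) for Bernoulli data: a Beta-function ratio.
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return log_beta(a + heads, b + tails) - log_beta(a, b)

heads, tails = 8, 2
components = [(0.8, 50.0, 50.0),   # 80%: my confident Beta(50, 50) is right
              (0.2, 1.0, 1.0)]     # 20%: I'm overconfident; fall back to flat

# Bayes reweights each component by its marginal likelihood of the data...
weights = [w * exp(log_marginal(a, b, heads, tails)) for w, a, b in components]
total = sum(weights)
# ...then the posterior mean is the weight-averaged component posterior mean.
post = sum((w / total) * (a + heads) / (a + b + heads + tails)
           for w, (_, a, b) in zip(weights, components))
print(f"posterior mean with honest meta-uncertainty: {post:.3f}")  # ~0.597
```

On these numbers it lands near 0.60: more movement than the stubborn Beta(50, 50) alone allows, far less than the flattened prior produces, and every extra bit of movement is paid for by the evidence rather than by a free pre-update.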
Thanks, that's a fantastic description of a phenomenon I've observed but couldn't quite put my finger on.