Comment by jrowen
4 days ago
This jibes with my general reaction to the post, which was that the added complexity and difficulty of reasoning about the ranges actually made me feel less confident in the result of their example calculation. I liked the $50 result: you can tack on a plus or minus range, but generally you feel like you're about breakeven. On the other hand, "95% sure the real balance will fall into the -$60 to +$220 range" feels like it's creating a false sense of having more concrete information, when you've really just added compounding uncertainties at every step (if we don't know that each one is definitely 95%, or the true min/max, we're just adding more guesses to be potentially wrong about). That's why I don't like the Drake equation: every step just compounds wild-ass guesses, so is it really producing a useful number?
It is producing a useful number. As more truly independent terms are added, the error grows with the square root of the number of terms while the point estimate grows linearly. In aggregate, the error makes up a smaller share of the point estimate (the quick simulation below illustrates this).
This is the reason Fermi estimation works. You can test people on it, and almost universally they get more accurate with this method.
If you got less certain of the result in the example, that's probably a good thing. People are overconfident by default with their estimated error bars.
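A quick Monte Carlo sketch of the square-root point, with made-up numbers (n independent estimates, each averaging 10 with a standard deviation of 3; nothing here comes from the post itself):

```python
# Sketch: summing n independent noisy estimates. The total grows linearly
# with n, while its standard deviation grows only like sqrt(n), so the
# *relative* error shrinks as more independent terms are added.
import random
import statistics

def total_of_estimates(n, mean=10.0, sd=3.0):
    """One simulated total of n independent noisy estimates."""
    return sum(random.gauss(mean, sd) for _ in range(n))

for n in (1, 4, 16, 64, 256):
    trials = [total_of_estimates(n) for _ in range(20_000)]
    point = statistics.mean(trials)    # grows roughly linearly in n
    spread = statistics.stdev(trials)  # grows roughly like sqrt(n)
    print(f"n={n:>3}  total≈{point:8.1f}  sd≈{spread:6.1f}  "
          f"relative error≈{spread / point:.2%}")
```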
I read a bit on Fermi estimation, and I'm not quite sure exactly what the "method" is in contrast to a less accurate method; is it basically just getting people to think in terms of dimensional analysis? This passage from Wikipedia is interesting:
> By contrast, precise calculations can be extremely complex but with the expectation that the answer they produce is correct. The far larger number of factors and operations involved can obscure a very significant error, either in mathematical process or in the assumptions the equation is based on, but the result may still be assumed to be right because it has been derived from a precise formula that is expected to yield good results.
So the strength of it is in keeping it simple and not trying to get too fancy, with the understanding that it's just a ballpark/sanity check. I still feel like the Drake equation in particular has too many terms for which we don't have enough sample data to produce a reasonable guess. But I think this is generally understood and it's seen as more of a thought experiment.
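For what it's worth, here's the classic toy version of the exercise (the Chicago piano-tuner estimate). Every number below is a rough, order-of-magnitude assumption rather than data; the point is only that a chain of defensible guesses lands in the right ballpark:

```python
# Classic toy Fermi estimate ("piano tuners in Chicago").
# Every factor is a rough order-of-magnitude assumption, not data.
population                 = 3_000_000  # people in the city
people_per_household       = 4
piano_ownership_rate       = 1 / 5      # one in five households has a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day  = 4
working_days_per_year      = 250

pianos            = population / people_per_household * piano_ownership_rate
tunings_needed    = pianos * tunings_per_piano_per_year            # per year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year

print(f"roughly {tunings_needed / tunings_per_tuner:.0f} piano tuners")
```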
> People are overconfident by default with their estimated error bars.
You say this, and yet a top-level comment says, more or less, that people keep their error bars too close.
Sorry, my comment was phrased confusingly.
Being overconfident with error bars means placing them too close to the point estimate, i.e. the error bars are too narrow.
They mean the same thing. The original comment pointed out that people's qualitative description and mental model of the 95% interval mean they are overconfident… they think 95% means ‘pretty sure I’m right’ rather than ‘it would be surprising to be wrong’.
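Here's a small sketch of what that cashes out to, with made-up numbers (the "estimator" just reports symmetric intervals around a noisy quantity): quoting ±1 standard deviation while calling it a 95% interval means being wrong about one time in three, not one time in twenty.

```python
# Sketch: coverage of intervals of different widths around an estimate.
# A proper 95% interval needs roughly ±1.96 sd; an overconfident ±1 sd
# interval only covers the outcome about 68% of the time.
import random

SD, TRIALS = 20.0, 50_000  # made-up spread of the quantity being estimated

def coverage(half_width):
    """Fraction of outcomes landing within ±half_width of the estimate."""
    hits = sum(abs(random.gauss(0.0, SD)) <= half_width for _ in range(TRIALS))
    return hits / TRIALS

print(f"±1.96 sd: {coverage(1.96 * SD):.1%} coverage")  # ~95%: 'surprising to be wrong'
print(f"±1.00 sd: {coverage(1.00 * SD):.1%} coverage")  # ~68%: wrong about 1 time in 3
```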
I think the point is to create uncertainty, though, or at least to capture it. You mention tacking a plus/minus range onto $50, but my suspicion is that people's expected plus/minus would be narrower than the actual range. I think the primary value of the example is that it makes it clear there's a very real possibility of the outcome being negative, which I don't think most people would acknowledge when they got the initial positive result. The increased uncertainty and the decreased confidence in the result are a feature, not a bug.
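To make that concrete with made-up numbers (these are not the post's actual line items, just hypothetical ones with a similar flavor): a budget whose point estimate is comfortably positive can still carry a sizable chance of ending up negative once each item's uncertainty is carried through.

```python
# Sketch: a toy budget whose point estimate is about +50, but whose
# simulated 95% range includes clearly negative outcomes.
import random

def simulate_balance():
    # Each line item is a made-up estimate with its own uncertainty.
    income = random.gauss(1000, 50)
    rent   = random.gauss(600, 20)
    food   = random.gauss(250, 30)
    misc   = random.gauss(100, 30)
    return income - rent - food - misc

trials = sorted(simulate_balance() for _ in range(50_000))
mean = sum(trials) / len(trials)
lo = trials[int(0.025 * len(trials))]
hi = trials[int(0.975 * len(trials))]
p_negative = sum(t < 0 for t in trials) / len(trials)

print(f"point estimate ≈ {mean:.0f}")                      # about +50
print(f"95% range ≈ {lo:.0f} to {hi:.0f}")                  # includes negative values
print(f"chance of ending up negative ≈ {p_negative:.0%}")   # roughly 1 in 4 here
```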