Comment by applfanboysbgon
9 hours ago
> pretty big credibility hit after getting that election so wrong
That this is the narrative that survived the election is one of the greatest indictments of our society's ability to engage in critical thinking.
The day before the election, the Huffington Post published a hit piece criticising Nate for overrating Trump's odds and stoking panic. HuffPost's model gave Clinton a 98.2% chance of winning, the NYT's gave her 85%, and Nate's model dared to give her only 65%.
Then the election happens, Trump wins, and the credible figure who gave him the highest odds gets lambasted from the other direction. He was so wrong to give Trump a chance that the mainstream media published articles about it, and then so wrong not to give Trump a 100% chance that it ruined his reputation. The moral: you literally can't win, because people are too fucking stupid to comprehend probability, period.
538 made thousands of forecasts of events they predicted to happen 30% of the time, and those events actually happened 29% of the time. Does that mean they were wrong every single time a 30% event came to pass? For a forecaster to be 'correct', must events forecast at 30% never happen?
Whenever people knock them for mis-calling a race, the thing I always refer to (or would, if ABC hadn't burned the site down) is their "Check Our Work" page, which sorted all of their predictions across politics, sports, etc. into buckets by the odds they gave, and then showed the percentage of outcomes in each bucket that actually occurred. It was remarkably accurate -- i.e., races where a candidate was given 65% odds were indeed won by that candidate approximately 65% of the time. This held from the surefire 95% calls (which did fail about 5% of the time) down to the 5% longshots (which pulled an upset about 5% of the time).
Here's the last archived version of the page:
https://web.archive.org/web/20250306183754/https://projects....
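For anyone who never saw the page, a calibration check like that is only a few lines of code: bucket every forecast by its stated probability, then compare each bucket's average prediction to how often the event actually happened. Here's a sketch on simulated data (a perfectly calibrated forecaster by construction, since the real 538 dataset is gone; all names and numbers are invented for illustration):

```python
import random

random.seed(0)

# Stand-in for the "Check Our Work" data: each forecast is a
# (predicted probability, actual outcome) pair. We simulate a
# well-calibrated forecaster, so the buckets should line up.
forecasts = []
for _ in range(100_000):
    p = random.random()
    outcome = random.random() < p  # event happens with probability p
    forecasts.append((p, outcome))

def calibration_table(forecasts, n_buckets=10):
    """Group forecasts into probability buckets (0-10%, 10-20%, ...)
    and compare each bucket's mean prediction to its hit rate."""
    buckets = [[] for _ in range(n_buckets)]
    for p, outcome in forecasts:
        i = min(int(p * n_buckets), n_buckets - 1)
        buckets[i].append((p, outcome))
    table = []
    for b in buckets:
        if not b:
            continue
        mean_pred = sum(p for p, _ in b) / len(b)
        hit_rate = sum(o for _, o in b) / len(b)
        table.append((mean_pred, hit_rate, len(b)))
    return table

for mean_pred, hit_rate, n in calibration_table(forecasts):
    print(f"predicted {mean_pred:.0%}  happened {hit_rate:.0%}  (n={n})")
```

The point being: "was the 65% bucket right 65% of the time" is the question a calibration table answers, and it's the only sense in which a probabilistic forecaster can be graded.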
Not to mention, when you use a Monte Carlo model, you can easily count the samples that lead to particular outcomes. In their post-election review, they noted that a correlated polling miss in the Midwest was one of the most common scenarios making up that 35% chance of a Trump win.
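Since a Monte Carlo model is just a pile of simulated elections, tallying a scenario like "correlated Midwest miss" really is a matter of counting samples. A minimal sketch with invented margins, safe-EV totals, and error sizes (nothing here reflects 538's actual inputs); the key ingredient is the shared national error term, which is what makes a Midwest sweep one coherent scenario rather than three independent flukes:

```python
import random

random.seed(1)

# Toy model, made-up numbers: each battleground has a Clinton polling
# margin; a national error is shared across all states (the correlation),
# plus an independent per-state error.
BATTLEGROUNDS = {  # state: (Clinton margin in points, electoral votes)
    "WI": (5.0, 10), "MI": (4.0, 16), "PA": (2.0, 20),
    "FL": (0.5, 29), "NC": (-1.0, 15), "OH": (-2.0, 18),
}
TRUMP_SAFE_EV = 197   # hypothetical EVs locked up outside the battlegrounds
MIDWEST = {"WI", "MI", "PA"}

def simulate():
    """One simulated election: returns (Trump EV, set of states he flips)."""
    national_err = random.gauss(0, 3)        # correlated error, all states
    flipped = set()
    for state, (margin, _) in BATTLEGROUNDS.items():
        state_err = random.gauss(0, 3)       # independent per-state error
        if margin + national_err + state_err < 0:
            flipped.add(state)
    trump_ev = TRUMP_SAFE_EV + sum(
        ev for s, (_, ev) in BATTLEGROUNDS.items() if s in flipped)
    return trump_ev, flipped

N = 50_000
trump_wins = midwest_sweeps = 0
for _ in range(N):
    trump_ev, flipped = simulate()
    if trump_ev >= 270:                      # ties ignored in this toy model
        trump_wins += 1
        if MIDWEST <= flipped:               # WI, MI and PA all flipped
            midwest_sweeps += 1

print(f"P(Trump win) ≈ {trump_wins / N:.0%}")
print(f"P(Midwest sweep | Trump win) ≈ {midwest_sweeps / trump_wins:.0%}")
```

Drop the national error term and the sweep becomes vanishingly rare; that difference between correlated and independent polling errors is exactly what the "Trump has a real chance" call hinged on.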
The idea that Silver somehow 'hit' in 2012 when he correctly predicted all states and 'missed' in 2016 is so juvenile I get second-hand embarrassment whenever I see it.
+1. It's insane how history has been rewritten here, though it probably helps that 95% of people don't understand what the fuck a forecast is. They translate "30% chance Trump wins" into "0% chance Trump wins" because 30 < 50. Try asking them whether they think a 45% chance of rain means it definitively isn't going to rain and hear the wind whistling through the empty space between their ears.
You know what... just down below there's a comment saying Nate Silver was always wrong, just like meteorologists...
¯\_(ツ)_/¯
Normies are very bad at the concept of statistical calibration. News at 11!
But yes, I agree with you that it's surprising to hear people on Hacker News repeating the same 180-degrees-wrong impression the general population took away from the one context where normal people actually care about polling: presidential elections.