And what numbers they were. His "prediction"--though really just the most likely outcome among the many scenarios generated by his model--nailed the electoral college total on the night.

In particular, you have to remember that, while Nate Silver gives President Obama a 77.4 percent chance of winning the presidential election, that's not the same thing as saying that Obama is going to win.

Suppose a weather forecasting model predicts that the chance of rain in Chicago tomorrow is 75 percent. How do we determine whether the model produces accurate assessments of probabilities? After all, the weather in Chicago tomorrow, just like next week’s presidential election, is a “one-off event,” and after the event the probability that it rained will be either 100 percent or 0 percent. (Indeed, all events that feature any degree of uncertainty are one-off events – or to put it another way, an event with no unique characteristics features no uncertainty.)

The answer is that the model’s accuracy can be assessed retrospectively over a statistically significant number of cases, by checking how well its probabilistic estimates match observed frequencies. If, for example, this particular weather forecasting model predicted a 75 percent chance of rain on 100 separate days over the previous decade, and it rained on 75 of those days, then the model was perfectly calibrated for that forecast. This does not mean the model was “wrong” on the days when it didn’t rain, any more than it will mean Silver’s model is “wrong” if Romney were to win next week.
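That retrospective check can be sketched in a few lines of Python. The forecast history here is simulated for illustration, not real weather data:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical record: 100 days on which the model forecast a 75 percent
# chance of rain. We simulate the outcomes from a true 75 percent rain
# probability; a real check would use the observed weather record.
forecast = 0.75
outcomes = [random.random() < forecast for _ in range(100)]

# Calibration for this forecast bucket: the fraction of those days on
# which it actually rained, compared against the stated 75 percent.
observed_rate = sum(outcomes) / len(outcomes)
print(f"forecast 75%, observed {observed_rate:.0%} over {len(outcomes)} days")
```

A well-calibrated model's 75 percent forecasts are followed by rain roughly three-quarters of the time; the individual dry days are not evidence against it.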

What Silver is predicting, in effect, is that as of today an election between a candidate with Obama’s level of support in the polls and one with Mitt Romney’s level of support in those polls would result in a victory for the former candidate in slightly more than three out of every four such elections.
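Read as a long-run frequency, the 77.4 percent figure says that across a large batch of such elections, the leader wins a bit more than three out of four. A minimal simulation makes the point (the repeated elections are hypothetical, of course):

```python
import random

random.seed(1)  # reproducible illustration

p_win = 0.774     # Silver's probability for the leading candidate
trials = 100_000  # imagined replays of an election under these conditions

# Each trial is one election; the leader wins it with probability p_win.
wins = sum(random.random() < p_win for _ in range(trials))
print(f"leader won {wins / trials:.1%} of {trials} simulated elections")
```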
