Time to return to an electoral literacy project: how to read election projection models.
This comes up because Josh Kraushaar picked a fight with Nate Silver and the Upshot projections this morning:
[C]ount me underwhelmed by the new wave of Senate prediction models assessing the probability of Republicans winning the upper chamber by one-tenth of a percentage point. It's not that the models aren't effective at what they're designed to do. It's that the methodology behind them is flawed. Unlike baseball, where the sample size runs in the thousands of at-bats or innings pitched, these models overemphasize a handful of early polls at the expense of on-the-ground intelligence on candidate quality. As Silver might put it, there's a lot of noise to the signal.
Here’s the thing: It should be possible for a careful observer to beat the projection systems. But first, you have to understand what they’re saying.
As I explained last week, what the Senate models are telling us is, “here is what we should expect in November, if current conditions (including expected conditions) hold.” Those conditions include both national forces -- the economy, the president’s popularity, political context issues such as midterm vs. presidential year -- and individual contest factors, such as the partisan balance of the contested states and the strength of the candidates.
There are two types of uncertainty, then: the normal uncertainty in any statistical estimate, and the additional uncertainty about whether the conditions of the electoral cycle will match what the model expects right now. Trying to beat the first kind is a sucker’s bet; it’s like betting that “heads” will come up 60 percent of the time with an honest coin.
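The coin-bet point can be made concrete with a quick simulation. This is a hypothetical sketch for illustration only, not how any of the projection models discussed here work:

```python
import random

# With an honest coin, the observed heads rate converges toward 50%
# as the number of flips grows, so a standing bet that heads comes up
# 60% of the time is a losing proposition -- the "sucker's bet" above.
random.seed(42)  # fixed seed so the sketch is reproducible
flips = [random.random() < 0.5 for _ in range(10_000)]
heads_rate = sum(flips) / len(flips)
print(f"heads rate over 10,000 flips: {heads_rate:.3f}")
```

Over 10,000 flips the heads rate lands very close to 0.5; the residual wobble is the first, irreducible kind of uncertainty, which no amount of expertise can beat.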
But it is possible to beat the second type of uncertainty. It’s certainly possible that an expert could predict economic conditions later this year better than whatever the various systems plug in for their predictions, for example. And it’s certainly possible that an expert political observer could do better than the objective indicators that these models generally use for candidate quality.
It’s also possible, of course, that the models are poorly constructed in some way. That’s what Kraushaar claims -- that the models undervalue national conditions and overvalue the smattering of individual election head-to-heads. But that doesn’t square with the models’ own explanations. For example, the Upshot says, “when polls are sparse or when the election is still months away, we stick closer to the background information.” Both FiveThirtyEight and the Monkey Cage make similar choices. Of course, it’s possible that they all still put too much weight on those polls, but it’s worth noting that these are all empirically derived models; in other words, they’re developed by looking at past results. That can be very wrong when conditions change, but Kraushaar needs some sort of argument here, not just an assertion.
Again, I think an expert observer in principle can beat the expectations derived from the models. But Kraushaar comes up short there, too. He claims that “The models also undervalue the big-picture indicators suggesting that 2014 is shaping up to be a wave election for Republicans, the type of environment where even seemingly safe incumbents can become endangered. Nearly every national poll, including Tuesday's ABC News/Washington Post survey, contains ominous news for Senate Democrats.” But the models do pay attention to national-conditions polls, and apparently more carefully than Kraushaar, who singles out a poll that came in unusually low (rather than, say, Obama’s recent improvement in Gallup polling). Yes, Obama’s approval is weak, but it’s been rising all year, and at any rate national conditions certainly are included in the projections.
The biggest source of model error that humans should be better than machines at catching, at this point, arises when objective factors (candidate experience, for example) turn out to be misleading -- a statewide officeholder who appears to be unusually incapable of running a campaign would be overvalued by the computers, while a first-time candidate with unusually strong electioneering skills would be undervalued. But there’s relatively little of that in Kraushaar’s piece. He does assess some of the individual candidates (which is good!), but he doesn’t appear to be comparing them to how the models actually treat them. And he returns again and again to a claim that the models are underplaying the possibility of a Republican landslide without identifying anything at all that he’s seeing and that the models are not seeing.
To contact the author on this story:
Jonathan Bernstein at email@example.com
To contact the editor on this story:
Tobin Harshaw at firstname.lastname@example.org