Nope, Barack Obama and Mitt Romney didn't finish in a dead heat in Arkansas in 2012. Photographer: Scott Eells/Bloomberg

A fuss erupted this morning after a New York Times Upshot/Kaiser Family Foundation poll showed relatively positive results for threatened Democratic incumbent senators in Arkansas, Louisiana and North Carolina, along with mixed news for Senate Minority Leader Mitch McConnell's re-election bid in Kentucky. Naturally, Republicans started pulling the poll apart.

The result was a Bill Kristol post attempting to discredit the whole thing because a separate question asking people whom they voted for in 2012 returned a dead heat in Arkansas, even though Mitt Romney won the state in a landslide. So that 10-point Mark Pryor lead the poll reported? “A reputable news organization would have looked at question 12 and thrown the poll out,” Kristol said.

It’s nowhere close to that simple. First, though it wasn’t described carefully, the Upshot’s Nate Cohn now says that the 2012 vote question is reported for all adults, while the Senate race is reported for registered voters. That probably accounts for some of the apparent disparity. It’s also well-established that people lie about whether they voted in previous elections, and about whom they voted for -- and faulty memories (or whatever) bias the results in favor of the election winner. (Here’s an old Mark Blumenthal post responding to complaints from Democrats about a similar poll.) We should expect any poll of all adults in 2014 to overstate Barack Obama's 2012 results, perhaps by a lot. Cohn points out, too, that on other questions (Obama's approval ratings, for example) the answers seem about right.

The poll still could be wrong. But as Greg Sargent pointed out, “It's totally legit to question NYT poll. But instead of ‘skewed sample,’ better argument is: ‘Stick to polling averages.’”

In this particular case, most recent polls of the Arkansas Senate race have been conducted by partisan shops, making it hard to find direct comparisons, and that means the polling average is going to be relatively unreliable. Still, HuffPollster has 23 polls in its database for this race, and estimates (this survey included) a 3-point Pryor lead. In other words, NYT/Kaiser probably is an outlier on this question.

So should the Upshot have “thrown the poll out,” as Kristol suggested? Absolutely not. Saying the poll is likely to be “wrong” isn't the same as saying it contains no information. The way the math works, a 10-point lead for a candidate in one poll is a lot more plausible if the real lead is, say, 6 points, than if the race is actually tied. A survey will tug the polling average in one direction, and the next ones will either confirm that direction or push it back the other way.
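To put rough numbers on that, here's a minimal simulation sketch of my own -- the 1,000-voter sample, the two-way race and the vote shares are hypothetical round numbers, not anything from the NYT/Kaiser poll itself:

```python
# A minimal sketch of sampling error in a hypothetical two-way race:
# how often does a poll of 1,000 voters show a 10-point lead, purely
# by the luck of the draw, under different true margins?
import numpy as np

def share_showing_10pt_lead(true_share, n_voters=1000, trials=100_000):
    """Fraction of simulated polls showing the candidate up by 10+ points."""
    rng = np.random.default_rng(0)
    votes = rng.binomial(n_voters, true_share, size=trials)
    margins = (2 * votes - n_voters) / n_voters * 100  # lead in points
    return (margins >= 10).mean()

print(share_showing_10pt_lead(0.53))  # true 6-point lead (53-47): ~10% of polls
print(share_showing_10pt_lead(0.50))  # dead-even race: well under 1% of polls
```

Under a true 6-point lead, roughly one simulated poll in ten shows a 10-point margin; in a genuinely tied race, almost none do. An outlier is weak evidence on its own, but it is not no evidence.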

The biggest difficulty in interpreting surveys is that, in most cases, we don’t know what the full population believes. That’s why we do surveys. We know there’s been a lot of campaigning in Arkansas recently, as Pryor has attacked his Republican challenger, Tom Cotton. Reading the polling results is the only way to find out the effects of the campaigning. And we know – with absolute certainty – that not only will those polls float around the true result, but that some of them, by dumb luck, will be far off.¹

That's true even if the sample is a perfect match for the population on demographic or other measures. Indeed, the whole idea of rooting around in the details of the poll to "prove" that there's something weird about it is the wrong way to go. It doesn't really matter why this one is 7 points from the polling average; all that matters is what we do with the result.

The Times did get something wrong. The write-up by Jonathan Martin and Megan Thee-Brenan should have set the results, and particularly the (likely) Arkansas outlier, in the context of other recent polls. As too many news outlets do after commissioning a poll, the Times portrayed the survey as the new reality rather than as one additional piece of evidence -- and in this case, the evidence by itself appears to be fairly misleading. That’s just bad reporting.

Nonetheless, they shouldn't have thrown out the results. When it comes to election polling, you don’t ignore outliers; you toss them in with all the other findings and wind up with a pretty reliable result.
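As a toy illustration of that tossing-in (the margins below are invented for the example, not HuffPollster's actual data):

```python
# Hypothetical Pryor leads, in points, from a stack of recent polls,
# with one 10-point outlier tossed in at the end.
margins = [4, 2, 3, 1, 5, 2, 4, 3, 2, 10]

with_outlier = sum(margins) / len(margins)                # 3.6 points
without_outlier = sum(margins[:-1]) / (len(margins) - 1)  # ~2.9 points

# The outlier tugs the average up by less than a point; if it was just
# noise, the next few polls will pull the average back down.
print(with_outlier, without_outlier)
```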

¹ There can be more than just dumb luck involved; pollsters can exhibit “house effects” that reveal small biases toward one party. Some polling aggregators and prediction models take that into account. Upshot/Kaiser is new, so we can’t yet know whether that’s happening here. But it’s highly unlikely that a house effect accounts for much of this result: house effects usually run less than five percentage points, well short of the 7-point gap here.
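For a sense of what a crude version of that adjustment might look like -- the numbers here are hypothetical, and real aggregators use more careful models:

```python
# A rough sketch of estimating a pollster's "house effect": its average
# deviation from the polling average across races it has surveyed.
# Positive values mean the shop's margins lean toward the Democrat.
deviations = [+2, +1, +3, 0, +2]  # five hypothetical races, in points

house_effect = sum(deviations) / len(deviations)  # 1.6 points
print(house_effect)

# An aggregator might subtract this from the shop's future reported
# margins before folding them into the average.
```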

To contact the writer of this article: Jonathan Bernstein at Jbernstein62@bloomberg.net.

To contact the editor responsible for this article: Max Berley at mberley@bloomberg.net.