Nate Cohn looks under the hood of some of the polls that use automated calls to gather data. He finds that the methods that make the results problematic are the very methods that enable those pollsters to field so many surveys.
The question is what poll readers should do about it; or, more properly, what forecasters and polling aggregators should do. As readers, we should pay as little attention as possible to individual surveys and instead look at polling averages.
In fact, looking under the hood may be the wrong way to go. It may make for an interesting story, but unless the answer points to fraud (not the case here), it doesn’t really matter why any particular pollster gets it wrong. What matters is whether they are systematically biased in any predictable way.
Poll averagers have two tools to deal with this. If a pollster systematically favors one party, that will create a house effect that can be adjusted for. If, on the other hand, a pollster is all over the place with no particular consistency, averagers then can employ a weighting scheme to downplay that outfit’s results.
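The two tools described above can be sketched in a few lines of code. This is a minimal illustration, not any aggregator's actual method; the pollster names, house-effect estimates, and weights are all hypothetical numbers chosen for the example.

```python
# Hypothetical polls: (pollster, Democratic margin in percentage points).
polls = [
    ("Pollster A", +4.0),
    ("Pollster B", +1.0),
    ("Pollster C", -2.0),
]

# Tool 1: estimated house effects -- how far each outfit's results tend
# to lean (positive = leans Democratic) relative to the field. These
# would normally be estimated from past performance; here they're made up.
house_effect = {"Pollster A": +2.0, "Pollster B": 0.0, "Pollster C": -1.0}

# Tool 2: reliability weights -- an erratic, all-over-the-place pollster
# gets downweighted rather than excluded.
weight = {"Pollster A": 1.0, "Pollster B": 1.0, "Pollster C": 0.5}

def adjusted_average(polls, house_effect, weight):
    """Weighted average of margins after removing each pollster's lean."""
    num = sum(weight[p] * (m - house_effect[p]) for p, m in polls)
    den = sum(weight[p] for p, _ in polls)
    return num / den

print(adjusted_average(polls, house_effect, weight))
```

With these made-up figures, Pollster A's +4 is pulled back to +2 by its house effect, and the erratic Pollster C contributes only half a vote to the average. Real aggregators estimate both quantities from a pollster's track record rather than asserting them by hand.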
I’m not saying that investigations such as Cohn's aren't worthwhile. For example, the automated-polling companies, which cannot reach mobile phones, may have a systematic bias against younger people that will play out differently in different states depending on demographics. Examining that carefully might uncover a complex house effect, which is best handled with something other than an across-the-board adjustment. The better the Upshot understands what’s happening with various polling outfits, the better its Senate forecaster is going to work.
But for most of us, most of the time, the correct reaction to a goofy-looking poll isn’t to figure out how it went wrong; it’s to wait for one of the polling aggregators to toss it into the mix, and then look at the average. We’re all prone to cherry-picking when it comes to polls, and partisans on both sides will always be able to “unskew” any particular survey. Ignore those unskewers and keep the focus on polling averages.
Here, I mean bias in the technical sense: whether the results tend to exaggerate support for one particular candidate or party. It doesn't matter what political opinions the pollster holds. What matters is the results.
To contact the author on this story:
Jonathan Bernstein at email@example.com
To contact the editor on this story:
Max Berley at firstname.lastname@example.org