After trailing in the polls right up to Election Day, the BC Liberals won another mandate. It is yet another provincial election result that the media and public are calling “surprising.” Was it really? According to the polls, it was. Now we are amidst a host of pollster “mea culpas” and claims from those who say they got it “less wrong than the others.” Herein lies a new problem for the media, politicians and the public: our faith in the accuracy of political polling.
First, let me disclose that I do conduct public opinion research as part of my business, and have done political polling in the past. I have long regarded this as a looming problem, and over the last two years it has become a major one.
So, what is going wrong with all these polls?
Political polling has traditionally been a test of a public opinion research firm’s accuracy. It is considered a loss leader (typically priced below cost), with the expectation that an accurate poll result will build the firm’s reputation and attract new, more profitable business. Over the last decade, this premise has changed dramatically.
With the rise of the software-based polling methods – notably online polling using internet panels of self-selected respondents, as well as interactive voice response (IVR) systems (typically referred to as “robodials” by the public) – the cost of entry for new methods and firms has never been lower. Gone are the days of excellent response rates to telephone (landline) polls. And gone are the days of predictably engaging the public to garner their political inclinations.
Costs matter. We are in a new world – low voter turnouts, multiple communication technologies, social media platforms, and parties’ use of geo-demographic targeting and sophisticated voter identification methods to find supporters. These have dramatically affected the political polling business, and pollsters have been slow to adjust, or are not evolving their skills.
There is that old saying – “Fast, cheap and good. Pick two” – which is truly applicable here. While corporations typically choose a combination of fast or cheap with good, media outlets have opted for fast and cheap. The business model of polls and the media has evolved. Media are currently either cash-strapped or losing money, and thus, in most cases, either do not pay for political polling or pay only for access to polls already conducted. I have colleagues who say they will not conduct polls for the media unless they are paid. Predictably, far fewer of their polls appear in the papers now. During the last Alberta provincial election, a regional newspaper approached my firm to conduct a poll. We provided a quote, to which they responded by asking if we could do it for free as “it would be good for your reputation.” We thanked them for the offer and declined.
Pundits play favourites: There is the additional dimension of politicos and the media’s obsession with the “horse race.” Many column inches are taken up with the analysis of poll results and insights from pollsters (some of you may include this article in that category as well). While these stories do capture the pulse of an election, they do not take into account the overall election ecosystem and the body politic.
There are deeper issues at play here.
After the 2012 US national election, Nate Silver, most likely the most famous political statistician at this moment, published an eye-opening analysis of polling data collection methodologies and pollster accuracy. The findings were revelatory – fast and cheap methods had larger respondent biases (from supporters of specific political parties) and were less accurate.
Surprisingly, the best-performing poll was the Columbus Dispatch’s old-school mail survey. Overall, live telephone operator and internet panel polls performed significantly better than robodials. These methods were better at establishing a more population-representative sample that captured the diversity of opinion and voting behaviour. However, they are also significantly more expensive than the cheap-to-operate, large sample, conducted-overnight robodials. Clearly there is a trade-off here. This leads to three dimensions of polling itself:
What to ask in polls? Polling, in its cheapest form, focuses on the horse race. But elections are more than that. They are tests of political parties’ brands, the public’s confidence in the economy and their governments’ stewardship, the alignment of voters’ values with parties, and societal trust. Quality polling captures these elements, and how they wax and wane during the writ period. Quality polling also entails more in-depth, statistical analysis that addresses aspects such as tests of correlation and voter segmentation – aspects that Nate Silver and his more methodical contemporaries embrace.
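To illustrate the kind of correlation test quality polling entails, here is a minimal sketch in plain Python. The respondent data is entirely hypothetical, as is the choice of variables (economic confidence versus incumbent support); the point is only the mechanics of checking whether two poll questions move together.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical respondents: confidence in the economy (1-5 scale)
# and whether they support the incumbent party (1 = yes, 0 = no).
confidence = [1, 2, 2, 3, 4, 4, 5, 5]
incumbent  = [0, 0, 0, 1, 0, 1, 1, 1]

print(round(pearson(confidence, incumbent), 2))  # 0.72
```

A coefficient near zero would suggest the horse-race number is moving independently of economic sentiment; a strong positive value, as in this toy data, suggests the two wax and wane together during the writ period.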
Political war rooms use a variety of tools. There is an inherent misalignment between pollsters and party war rooms. Pollsters have polls. War rooms have polls, plus social media monitoring platforms, feedback from their ground network, content analysis of media coverage, text analysis of editorials and public comments, as well as voter identification systems. Pollsters mostly ignore this latter element, but parties are investing heavily in it. The Conservative Party of Canada uses CIMS, while others are using Obama’s platform of choice, NationBuilder. These platforms are meant to address a question rarely considered in the media: What is a party’s secure and confirmed vote? Polls are not designed to capture this data, but voter identification is playing a larger role in election outcomes. Some parties are clearly better at getting their vote mobilized and to the polls on Election Day. The Conservative Party of Canada’s 2011 federal majority is a testament to this.
Further, data triangulation – finding the best insights across multiple sources – has always been a skill amongst the best war room teams. It is no surprise that data scientists – those with triangulation, interpretation and communication skills – are much sought after by political parties. Their talents are becoming more useful than those of the traditional party pollster.
A consistent misalignment of voter intentions and voter turnout. In most cases, answering a poll is not akin to actually voting. Polling is exposed to social desirability bias – I say I vote because it is the right thing to say, even if I don’t actually vote. Saying you want change and voting for change are independent events. This was evident in all of these “surprise” results. In my opinion, the real metrics that matter relate to the committed, intending voter: poll respondents who have a history of voting (themselves and in their family tradition) and who intend to vote on Election Day. In my analysis of polls from these “surprise” elections, this metric – while it yields a smaller respondent base with a higher margin of error – was a better predictor of voter turnout. Observing this metric within the context of the BC and Alberta elections, there were warning signs that the race turned toward the eventual winner earlier than most pollsters believed.
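The trade-off noted above – a smaller committed-voter base carries a higher margin of error – is simple arithmetic. The sketch below uses the standard 95% margin-of-error formula for a sample proportion; the sample sizes (800 total respondents, 250 committed voters) are hypothetical, chosen only to show how the error band widens as the base shrinks.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample proportion.

    Uses p = 0.5 (the worst case) unless told otherwise.
    """
    return z * math.sqrt(p * (1 - p) / n)

full_sample = 800   # hypothetical full poll
committed = 250     # hypothetical committed-voter subsample

print(round(100 * margin_of_error(full_sample), 1))  # 3.5 points
print(round(100 * margin_of_error(committed), 1))    # 6.2 points
```

Cutting the base from 800 to 250 nearly doubles the margin of error, yet the argument here is that this noisier number, drawn from people who actually show up, predicts outcomes better than a tighter number drawn from everyone.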
While the emergent media/pollster business model requires careful examination, the current business model of the media rules out any quick resolution of the “fast and cheap” polling problem. It does, however, exacerbate the biggest problem for pollsters – one facing political parties and democracy itself: low voter turnout. BC is flirting with the 50% floor, and Alberta saw turnout drop to 41% in 2008. Another question arises: Is the silent majority of non-voters (considered as a bloc) satisfied with this situation? There is much research into this, but no matter what, their ranks are growing, and no amount of suspect polling is going to solve that problem.