Perspectives on Polling – Part 2 of a 4-part series: The horse race ignores context and nuance

Horse-race polling

Based on articles published in the Globe and Mail and on the Market Research and Intelligence Association (MRIA) blog, this extended 4-part series looks at what’s wrong with political polling in Canada (and elsewhere) and asserts that it can and must be fixed. Drawing on his own experience in both the political and market research arenas, and on his interviews with thought leaders and pollsters from across Canada and the US, Brian F. Singh critiques conventional polling methods that are perpetuated by pollsters and passed on to the public by the media, and concludes with a 5-point call to action for the market research industry.

Part 2: The Horse Race Ignores Context and Nuance

What to ask in polls? Polling, in its cheapest form, focuses on the horse race. But elections are more than that. They are tests of political parties’ brands, the public’s confidence in the economy and in their governments’ stewardship, the alignment of voters’ values with parties, and societal trust. Quality polling captures these elements and how they wax and wane during the writ period. Quality polling also entails more in-depth statistical analysis that addresses aspects such as tests of correlation and voter segmentation – aspects that Nate Silver and his more methodical contemporaries embrace.

Horse-race polling, which has become the predominant form reported in the media, neglects the nuanced motivations that influence which candidate or party voters will support on Election Day. One factor that arose during my discussions with Barb Justason and Frank Graves was the economy. What we have seen consistently is that if people are generally satisfied, or have even a slight doubt about changing course on the economy, they tend to prefer the incumbent. Frank elaborates:

“…there’s a fragile economy out there, let’s not risk the adventure of a new government at this time. I believe that that factor was the same factor we saw in Ontario, the same factor we saw in Quebec and probably the same factor that was going on in Alberta.”

And Barb Justason shines a light on this question of the economy and how it was leveraged by the B.C. Liberals.

“Looking back, the BC Liberals, their campaign came together and that message from Christy Clark throughout the campaign – the economic message that was packaged into this nugget of not leaving debt to our children, I think that resonated and I think that really took hold, especially following the debate.”

We saw this emerging in Alberta, barring some gaffes by the Wildrose Alliance, and in BC, where, after the debate, Christy Clark fine-tuned her messaging to emphasize her party’s stewardship of the economy and how it has influenced the current and future quality of life of BC residents. Clearly, anger and complacency can coexist, and work to the benefit of a well-organized incumbent. This is also one of the reasons why the Conservative government continues to hammer home the point that it is a good steward of the economy (relative to other parties) through various talking points and its Economic Action Plan campaign – an ongoing campaign designed to appeal to its support base and a core tactic in its preparation for the 2015 federal election. Being perceived as strong on the economy resonates with committed voters.

Given the variation that we have witnessed across all polls within an election cycle, it is not the horse race alone but the relative position of the players in that horse race that has garnered the attention of some of the interviewees. This is becoming an interesting problem, as it reinforces the need for better context within surveys, and for the sharing of such questions, so that we have comparative data with which to understand the dynamics of an election. Michael Marzolini, president of Pollara, intimated in a CBC interview that polling has become lazy – it is missing the careful nuance that campaign teams pursue to do their job properly, and it is short-changing the public on where the true dynamic of an election lies, beyond the horse race itself.

Consideration should also be given to adding new types of questions that capitalize on our social instincts to herd and to take cues from our peers and the crowd. Voting, in many regards, follows social norms and herding effects. Some may say that the NDP’s Quebec “Orange Crush” was the result of just such a herding effect, driven by social cues. John Kearon, Founder & Chief Juicer at BrainJuicer, indicated that the question “How do you think your neighbours will vote?” was a better predictor of the outcome of the recent Italian national election than traditional horse-race questions.

Party brand is also at play, and how this nuance manifests itself in populations is largely neglected in horse-race-centric polling. How ingrained is it? I collaborate with Dr. Paul Zak at the Centre for Neuroeconomic Studies at Claremont Graduate University in Southern California, and we have noted that no matter what scandal or demonstrated weakness arises, the Republican/Conservative vote in the U.S. has remained relatively consistent over the last decade. A body of research emerging from neuroscience suggests that some people are firmly conservative voters and do not consider any other party a viable option. The data in Canada tend to support this, and Conservative parties have been remarkably effective at polarizing the electorate, and at finding and mobilizing this vote. While putting EEGs on voters may pose a challenge, we as pollsters have to be able to dig into linguistic methods and explore findings from other disciplines to improve the design of our polls.

The Pollster/War Room Dichotomy – Voter Identification & Data Triangulation

There is a perspective that has become pervasive, especially amongst campaign strategists and teams: they state that they don’t listen to the polls. This is utter nonsense. They all have pollsters on the team, and if they don’t, it’s because they can’t afford them. The true context of that statement is that their internal polls are conducted differently.

Political war rooms use a variety of tools, and there is an inherent misalignment between pollsters and party war rooms. Pollsters have polls. War rooms have polls, plus social media monitoring platforms, feedback from their ground network, content analysis of media coverage, and text analysis of editorials and public comments. They parse these data in a much more strategic way to suit their needs. Those who do it well have established a track record among senior members of the team of being very strategic and very collaborative. Further, data triangulation – finding the best insights across multiple sources – has always been a skill amongst the best war-room teams. It is no surprise that data scientists – those with triangulation, interpretation and communication skills – are much sought after by political parties; their talents are becoming more useful than those of the traditional party pollster. Thus, teams are focused on ensuring that their polling is useful for activities such as messaging and, more importantly, voter identification and getting out the vote. As stated above, the Conservatives have demonstrated remarkable guile on this front over the last three to five years.

Pollsters who published polling results during the course of the Alberta and BC elections appear to be saying in unison that there was last-minute movement by the electorate. This too is nonsense, and an attempt to blame the electorate for being fickle. If we delve into other data associated with party loyalty, comfort with the economy, and motivating factors to vote, it appears that many voters may have initially been disenchanted with the incumbent but returned to them come election day. Much of this has to do with the notion of change, with feeling comfortable with the state of the economy or not wanting to shake it up too much, as well as with voter identification.

I continue to harp on this issue of loyalty, as it is a card that the best-organized parties always have in their back pocket: they are much better at identifying the committed voter. In fact, conservative-leaning parties across the country invest substantial resources in this, with the Conservative Party of Canada’s CIMS database readily shared across the country with other conservative-leaning parties, including the BC Liberals. We are seeing tools such as NationBuilder (Barack Obama’s platform of choice) and Track and Field evolving exclusively for voter identification. These platforms are meant to address a question rarely considered in the media: what is a party’s secure and confirmed vote? Polls are not designed to capture this data, yet voter identification is playing a larger role in election outcomes. Some parties are clearly better at mobilizing their vote and getting it to the polls on Election Day. We as pollsters continually ignore this – not because we would have to account for these tools, but because of the very notion of modeling.

This said, Éric Grenier reinforces the case for quality independent polling in the media:

“If you don’t have accurate media polls, you can have the narrative driven by the internal polls and you don’t have a way to fact check them… there is no way to do an independent poll to figure out what was actually going on. That is one of the reasons why media polls need to be around, but also they need to be done right otherwise it’s worse than having no information at all.”

Modeling: A Preferred Method?

Grenier, on the problems in the polls prior to the B.C. election:

“On the one hand there were some problems with getting a representative sample for some reason in BC, especially with the online polls. For one reason or another, maybe the panel was not as representative as it could have been. There might have been some sort of … the weighting issues, when you look at how you are going to weight the poll – some places were weighting it according to how voting was happening in the last election, some were applying less important weighting.”
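
To illustrate the kind of past-vote weighting Grenier mentions, here is a minimal sketch: respondents’ recalled vote from the previous election is compared to that election’s actual result, and the resulting weights are applied to current vote intentions. The party names, counts and shares are hypothetical placeholders, and the approach shown is one generic version of the idea, not any specific pollster’s procedure.

```python
from collections import Counter

# Hypothetical respondents: (recalled vote in the last election, current vote intention).
respondents = [
    ("Party A", "Party A"), ("Party A", "Party B"), ("Party B", "Party B"),
    ("Party B", "Party B"), ("Party A", "Party A"), ("Party B", "Party A"),
    ("Party A", "Party A"), ("Party B", "Party B"),
    # ... in practice, many more records
]

# Actual shares from the previous election (illustrative placeholders).
actual_last_result = {"Party A": 0.45, "Party B": 0.55}

# Weight = actual past share / share of the sample recalling that past vote.
recall_counts = Counter(past for past, _ in respondents)
n = len(respondents)
weights = {
    party: actual_last_result[party] / (recall_counts[party] / n)
    for party in actual_last_result
}

# Apply the past-vote weights to current intentions.
weighted = Counter()
for past, current in respondents:
    weighted[current] += weights[past]

total = sum(weighted.values())
for party, share in weighted.items():
    print(f"{party}: weighted current share {share / total:.3f}")
```

As Grenier notes, the amount of weight given to past vote varied from pollster to pollster, which is exactly the kind of undisclosed judgment call that makes published numbers hard to compare.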

There is a consistent misalignment between voter intentions and voter turnout. In most cases, answering a poll is not akin to actually voting. Polling is also prone to social desirability bias – I say I vote because it is the right thing to say, even if I don’t actually vote. Saying you want change and voting for change are independent events. This was evident in all of these “surprise” results. In my opinion, the real metrics that matter relate to the committed/intending voter: poll respondents who have a history of voting (themselves and in their family tradition) and who intend to vote on Election Day. In my analysis of polls from these “surprise” elections, while this filter yields a smaller respondent base with a higher margin of error, the resulting number was a better predictor of voter turnout. Observing this metric within the context of the BC and Alberta elections, there were warning signs that things had turned for the eventual winner earlier than most pollsters believed.
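
As a rough illustration of this committed/intending-voter filter, here is a minimal sketch in Python. The field names (`vote_intent`, `voted_last_time`, `likely_to_vote`) and the sample records are hypothetical placeholders, not data from any of the polls discussed; the point is simply that the filtered base is smaller and carries a wider margin of error, yet may better reflect who actually turns out.

```python
import math

# Hypothetical respondent records; field names are placeholders, not from any real poll.
respondents = [
    {"vote_intent": "Party A", "voted_last_time": True,  "likely_to_vote": 9},
    {"vote_intent": "Party B", "voted_last_time": False, "likely_to_vote": 4},
    {"vote_intent": "Party A", "voted_last_time": True,  "likely_to_vote": 10},
    {"vote_intent": "Party B", "voted_last_time": True,  "likely_to_vote": 8},
    # ... in practice, hundreds or thousands of records
]

def committed(r, threshold=8):
    """Committed/intending voter: voted before AND says they are likely to vote now."""
    return r["voted_last_time"] and r["likely_to_vote"] >= threshold

def vote_shares(sample):
    """Share of each party's support within a given respondent base."""
    n = len(sample)
    counts = {}
    for r in sample:
        counts[r["vote_intent"]] = counts.get(r["vote_intent"], 0) + 1
    return {party: c / n for party, c in counts.items()}, n

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

all_shares, n_all = vote_shares(respondents)
committed_shares, n_committed = vote_shares([r for r in respondents if committed(r)])

print("All respondents:", all_shares, "MOE ~", round(margin_of_error(0.5, n_all), 3))
print("Committed voters:", committed_shares, "MOE ~", round(margin_of_error(0.5, n_committed), 3))
```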

All interviewees, like me, are firm believers that the ultimate challenge, as noted above, is to accurately model the voting population. This is where pollsters within a party’s or candidate’s campaign team focus their attention: they are not going to look at what they have sewn up or what is unattainable, but rather at where the swing is and where they can potentially capture some of the committed vote. They don’t waste time on potential voters who are not committed or intending to vote. In fact, Darrell Bricker pointed out that one IVR poll in the US during the last national election started off with the statement, ‘If you do not intend to vote in the election, please hang up.’ He indicated that this was a successful method for addressing this modeling consideration.

With the challenge of identifying who is going to vote, there is increasing attention being paid to modeling this population. There is an appetite for more predictive and scenario-type analyses to help determine appropriate weighting procedures for pollsters. Such procedures have long been considered the secret sauce of various pollsters, but that simply does not cut it any longer. We are seeing entities such as Google bring Bayesian-type approaches to identifying polling participants, and perform reasonably well in election polling. Their method is fairly transparent, and it indicates that more tools like this will be developed – tools that can generate more consistent tracking as well as higher response rates, especially around the notion of survey stitching: using online environments to continuously poll individuals without returning to the same individual twice. Thus, imputing who is an interested and committed voter becomes an interesting consideration for the industry. Further, given the reams of data that such an undertaking can generate, there is room for creativity in how communities could be developed whose data quality can be assessed relative to the general population. This will further challenge the media; campaign teams, however, will love these tools and adapt to them quickly.
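
To give a flavour of what a Bayesian-type approach to imputing the committed voter might look like, here is a minimal sketch: a Beta-Binomial posterior on each respondent group’s turnout propensity, estimated from hypothetical validated-turnout counts. The group labels, counts and prior are illustrative assumptions – this is not Google’s method or any pollster’s actual procedure – but the resulting propensities are the kind of quantity that could feed a likely-voter weighting scheme.

```python
from scipy import stats

# Hypothetical validated-turnout counts by respondent group from a past election:
# group -> (respondents who actually voted, respondents who did not).
# All names and numbers are illustrative placeholders.
validated = {
    "voted_last_time_and_says_certain": (180, 20),
    "did_not_vote_last_time_but_says_certain": (60, 40),
    "says_unlikely": (10, 90),
}

# Weakly informative Beta(1, 1) prior on each group's turnout propensity.
prior_a, prior_b = 1.0, 1.0

turnout_propensity = {}
for group, (voted, stayed_home) in validated.items():
    posterior = stats.beta(prior_a + voted, prior_b + stayed_home)
    lo, hi = posterior.interval(0.95)
    turnout_propensity[group] = posterior.mean()
    print(f"{group}: propensity {posterior.mean():.2f} (95% interval {lo:.2f}-{hi:.2f})")

# These propensities could then be applied as likely-voter weights to current responses.
```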

The challenge is that this imposes an abstraction of reality on the data – but a much-needed one if pollsters are to do their job better. With voter turnout of 52% in BC, and 41% (2008) and 57% (2012) in Alberta, we are practically at the point where it is a 50/50 shot to predict whether a given adult citizen is going to vote. This makes our job exceptionally hard. It also makes predicting what is going to happen even harder if we are still bound by the illusion that we are working with a probability sample. As Nate Silver has pointed out in his analysis of various polls, there are inherent biases in different methods, as well as regional discrepancies within those methods. One term that is becoming popular in polling is “horses for courses” – certain methods work better in certain regions – and understanding the mix of modes becomes the imperative of the pollster, even if it is also more challenging to explain.
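
As one way to picture what “understanding the mix of modes” could mean in practice, here is a minimal sketch of pooling mode-level estimates with inverse-variance weights after an assumed per-mode bias adjustment. The modes, bias values and shares are hypothetical placeholders, not figures from the BC or Alberta polls, and the pooling shown is a generic technique rather than any particular pollster’s method.

```python
import math

# Hypothetical mode-level estimates for one party in one region:
# (estimated share, sample size, assumed bias for that mode in that region).
# All numbers are illustrative placeholders.
mode_estimates = {
    "online":     (0.44, 800, +0.02),   # assume online panels run slightly high here
    "ivr":        (0.40, 1200, -0.01),  # assume IVR runs slightly low here
    "live_phone": (0.42, 500, 0.00),
}

def pooled_estimate(estimates):
    """Bias-adjust each mode, then combine with inverse-variance weights."""
    num, den = 0.0, 0.0
    for share, n, bias in estimates.values():
        adjusted = share - bias
        variance = adjusted * (1 - adjusted) / n  # simple binomial variance
        weight = 1.0 / variance
        num += weight * adjusted
        den += weight
    return num / den, math.sqrt(1.0 / den)

estimate, se = pooled_estimate(mode_estimates)
print(f"Pooled estimate: {estimate:.3f} (+/- {1.96 * se:.3f} at 95%)")
```

The hard part, of course, is the bias column: those adjustments have to be learned region by region from past performance, which is precisely the “horses for courses” problem.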

Justason reflects on her recent experience within this context of better modeling.

“We need to adjust the data to make sure we have done it and we need to learn from our mistakes. We also need to acknowledge that we have made mistakes and get off this high horse that somehow the electorate – almost implying the electorate made a mistake – that kind of thinking, we rely on the general public to talk to us and to help us with this kind of thing and to go back on them and say somehow they are the reason that our industry goofed on this twice now is disingenuous.”

* * *

Next up: Part 3: Calgary Centre: An odd case of public engagement
