Perspectives on Polling: Part 3 of a 4-part series – Calgary Centre: An Odd Case of Public Engagement

Calgary, Alberta

Based on articles published in the Globe and Mail and on the Market Research and Intelligence Association (MRIA) blog, this extended 4-part series looks at what’s wrong with political polling in Canada (and elsewhere) and asserts that it can and must be fixed. Drawing on his own experience in both the political and market research arenas, and on his interviews with thought leaders and pollsters from across Canada and the US, Brian F. Singh critiques conventional polling methods that are perpetuated by pollsters and passed on to the public by the media, and concludes with a 5-point call to action for the market research industry.

*  *  *

Part 3 – Calgary Centre: An Odd Case of Public Engagement

I reside in Calgary, where a by-election was held last November in the Calgary Centre riding. Given the restricted geography, the only polls conducted were telephone-based, via either live operators or interactive voice response (IVR), with an emphasis on the latter.

The initial IVR polls indicated a strong position for the Conservative Party of Canada candidate. My sense was that, given the nature of both the candidate and the riding, the polls might be overstating that strength; because this was a by-election, turnout was likely to be low, reducing the predictive power of the polls. After the writ was dropped and election day approached, more IVR polls were conducted. This being an isolated riding, as the IVR polls got underway the parties began sending messages to their supporters via social networks, asking them to answer their phones.

The results were intriguing: as more people became aware that polling was being undertaken, they were more inclined to respond to the polls. What is fascinating is that the IVR polls started to perform remarkably well; in fact, the last one released by Forum Research practically called the result, including the Conservative candidate’s slight lead on election day.

Thus, awareness of the polls, and of their importance within this particular riding, gave the public a vested interest in responding. While the stated voter intention rate was high, the actual distribution was more realistic than anything else I have seen in IVR polling within Calgary.

This was a one-off event, but an interesting one. The underlying question is this: if voters know when polling is being undertaken, regardless of their political inclination, could that awareness improve both quality and response rates?

The Matter of Calibration

In the future, IVR, online and mobile methodologies will likely predominate, given their lower cost. However, these do not address the issue of calibration: how do we establish methods that better represent the population and generate higher response rates? Part of the challenge is to calibrate against the population over the course of the election cycle: to understand the reality of voting behaviour, to identify who will actually vote, and then to zero in on that population and measure its intentions.

Yes, this is a modeling consideration, but we need to calibrate against known numbers, such as mobile phone ownership, vehicle ownership and home ownership: firm figures that we routinely use to assess the quality of data. This will put us in a better position as we evolve methodologies and do more research on research in future polls.
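
To make the calibration idea concrete, here is a minimal sketch in Python of iterative proportional fitting (better known as raking), a standard technique for adjusting respondent weights until a sample’s margins match known population benchmarks. The respondent records and benchmark shares below are purely illustrative assumptions, not figures from any actual poll:

    # Illustrative respondents: flags for mobile phone and home ownership.
    # All values here are hypothetical, not drawn from any real survey.
    respondents = [
        {"mobile": 1, "home": 1},
        {"mobile": 1, "home": 0},
        {"mobile": 0, "home": 1},
        {"mobile": 1, "home": 1},
        {"mobile": 0, "home": 0},
        {"mobile": 1, "home": 0},
    ]

    # Assumed population benchmarks (shares), e.g. from census or regulator data.
    benchmarks = {"mobile": 0.80, "home": 0.70}

    weights = [1.0] * len(respondents)

    # Raking: repeatedly rescale weights so each weighted margin
    # matches its benchmark; iterate until the margins converge.
    for _ in range(50):
        for var, target in benchmarks.items():
            total = sum(weights)
            owners = sum(w for w, r in zip(weights, respondents) if r[var] == 1)
            up = (target * total) / owners                     # scale owners
            down = ((1 - target) * total) / (total - owners)   # scale non-owners
            weights = [w * (up if r[var] == 1 else down)
                       for w, r in zip(weights, respondents)]

    # After convergence, the weighted shares match the known numbers.
    for var, target in benchmarks.items():
        share = sum(w for w, r in zip(weights, respondents) if r[var] == 1) / sum(weights)
        print(f"{var}: weighted share {share:.3f} vs benchmark {target:.3f}")

Production tools such as the rake() function in R’s survey package, or Python’s ipfn package, implement the same core loop; the point is that firm numbers like phone and home ownership give the weights something solid to converge against.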

The Rising Role of Aggregators

For this article I reviewed Nate Silver’s analysis of polls, and I also interviewed Éric Grenier. Both are aggregators of surveys: they rely upon the data, and the quality of that data, to do their jobs. The question arises: are they garnering more attention than the pollsters conducting the polls themselves? This is another proverbial ‘train that has left the station.’ They have gained media clout because they have become agnostic across polling methods and have factored that into their analyses, especially in their assessment of the quality of the polls themselves. They have become our de facto third-party polling evaluators.

This also presents a problem for them: to keep doing their jobs they are dependent upon our work. Moreover, they are not paying for that work, and we gain nothing in reputation other than the validation they provide for the polls our industry releases. Henning reflects on this:

“Nate Silver, Huffington Post, Talking Points Memo, and Real Clear Politics. There are four aggregators right there that probably get more attention combined than traditional polls, and for good reasons I think. It does create an interesting Tragedy of the Commons: why should I do the hard work, and do the increasingly expensive work to do it right, in terms of cell phone sampling and everything, if I am just going to end up subsumed into somebody else’s model?”

Is this a major problem for the industry? My belief is that it isn’t, but I do feel we need to collaborate with such individuals: to have them at the table, giving feedback based on their use of the data we generate.

The Importance of Disclosure and Transparency

Darrell Bricker is blunt about the state of our trade:

“This is one of the problems I also have with Canadian pollsters – we kind of get obsessed about what is happening here and we don’t necessarily learn from what is happening in other countries. I mean these aren’t new phenomena or new issues.”

Based on experiences in other jurisdictions, there is much to be learned, especially from a publishing standpoint. There is the Associated Press Stylebook’s guidance on polls and surveys, the New York Times’ polling standards, and the BBC’s guidance on opinion polls, surveys, questionnaires, votes and straw polls. These are excellent resources, but they are rarely used. What has generally prevailed is the notion of a margin of error, and we have seen margins of error quoted on all forms of surveys. While some consider this questionable, the bigger question is: what really is a margin of error?

Margin of error describes the sampling error in the data: the expected variation if the survey were reproduced using the exact same methodology. What has happened is that horse race questions have been fielded across the various methods, all quoting margins of error within the same timeframe, yet yielding dramatically different results. Thus, we have a disclosure issue to address so that there can be more external assessment and evaluation of a poll. The onus should be upon the pollster to publish a more complete snapshot of the methodology, the questionnaire and the dataset, to allow individuals to assess the quality of the poll. This will always remain somewhat problematic, as there are commissioned polls, and polls tacked onto an omnibus survey, that lack clear accountability.
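
As a back-of-envelope illustration of what that reproducibility claim amounts to, the familiar “plus or minus 3.1 points, 19 times out of 20” attached to a 1,000-person probability sample falls straight out of the normal approximation for a sampled proportion. A quick sketch in Python:

    import math

    def margin_of_error(p, n, z=1.96):
        """95% margin of error for a proportion p from a simple random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = 0.5) for a typical 1,000-respondent poll:
    print(f"+/- {100 * margin_of_error(0.5, 1000):.1f} points")  # prints: +/- 3.1 points

The formula presumes a simple random sample; once a poll departs from that assumption, as IVR and opt-in online panels do, the quoted figure no longer means what the phrase implies, which is precisely the disclosure problem.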

I am not advocating that people can’t conduct polls independently; we should never lose that right. However, the more we disclose about what we are doing and our motivations for doing it, the better we can focus on the quality of the polling, spot areas where things may be slipping, and address them more quickly over time – specifically, and especially, within an election cycle.

Bricker, likely one of the loudest voices in Canada on this issue, reinforces this perspective on disclosure and the self-reflection it demands:

“This is again about disclosure – research on research, being mature about our responsibilities as pollsters, as social scientists, to explain and to open ourselves up to critique and criticism by people who are actually having an informed discussion about this stuff.

“…When you go to AAPOR and see the absolute disclosure that they demand, all of the information that the pollsters give to them, how they dismiss people who play these kind of games and don’t even include them in the averages, that is the way we should be.”

Éric Grenier pointed out Nate Silver’s revelatory analysis prior to the 2012 US presidential election: he found that the more transparent a polling firm was, the better its results were. Food for thought.

* * *

Next up – Part 4: Where do we go from here?
