Perspectives on Polling – Part 1 of a 4-part series: The problem with polling (and pollsters)

The problem with polling

Based on articles published in the Globe and Mail and on the Market Research and Intelligence Association (MRIA) blog, this extended 4-part series looks at what’s wrong with political polling in Canada (and elsewhere) and asserts that it can and must be fixed. Drawing on his own experience in both the political and market research arenas, and on his interviews with thought leaders and pollsters from across Canada and the US, Brian F. Singh critiques conventional polling methods that are perpetuated by pollsters and passed on to the public by the media, and concludes with a 5-point call to action for the market research industry.

*  *  *

Foreword

The market research and intelligence industry has a problem with political polling – a problem that needs clear thought and action.

This article has been three years in the making. While I had done political polling before, doing it for a political campaign was a new experience. Amidst a discussion of a “surprise result” in Calgary’s 2010 municipal election, I realized that political observers and the media were ignoring some basic fundamentals in understanding and analyzing polls. I also realized that it was useless to consider an entire electorate when turnout was projected to be low. This latter phenomenon has driven my interest in the state of polling in Canada.

The facts cited here – from provincial elections in Alberta and British Columbia – are clear, and I encourage readers to review the results discussed.

This is primarily an opinion piece. It is also a different article than originally envisaged. I had planned to write an assessment of polling practices and their reporting. However, the BC election of May 14, 2013, prompted me to follow up with North American thought leaders to explore the dynamics of this and other recent elections, and to delve into the problems of the polling ecosystem itself. While it is easy to criticize, I have taken the opportunity to develop an agenda for action.

In preparing this article I reviewed numerous media reports of polls, examined industry and association reporting and compliance protocols, interviewed thought leaders on the topic, and drew on my own analysis and commentary at my blog, in The Globe and Mail and on CBC Radio and Television. Thought leaders interviewed included:

  • Dr. Keith Brownsey, Professor, Policy Studies, Mount Royal University, Calgary;
  • Barbara Justason, Principal, Justason Marketing Intelligence, Vancouver (@barbjustason);
  • Éric Grenier, Founder, ThreeHundredEight.com (@308dotcom);
  • Dr. Darrell Bricker, Global CEO, Ipsos Public Affairs, Toronto (@darrellbricker);
  • Jeffrey Henning, CEO, Researchscape, Norwell, MA (@jhenning);
  • Frank Graves, President and Founder, EKOS Research Associates, Ottawa (@VoiceofFranky); and,
  • Michael Mokrzycki, President, Mokrzycki Survey Research Services, NE Massachusetts (@mikemokr).

Much of the dialogue on polling – its problems, quality protocols and future – is happening on Twitter. I encourage all readers, if you have not done so already, to follow these individuals.

Part One: The problem with polling – and pollsters

We have a problem with political polling in Canada. Poor polling is affecting our reputation and the public is losing trust in our trade.

The signs have been around for a long time. Polling is at a crossroads. But the reality is that it will be at an eternal crossroads. And we need to strike a balance of tradition and innovation to grasp how to do more accurate and relevant political polling.

With the focus now on horse-race polling, we do ourselves a disservice by trying to simplify complex phenomena and by skipping the nuance of finding the truth embedded in the data.

Let’s rewind to May of this year…

After trailing in the polls right up to election day, the BC Liberals won another mandate – in another provincial election that the media and public are calling a “surprising” result. The question arises: Was this really surprising? According to the polls, it was. We witnessed yet another round of pollster “mea culpas” and pollsters flogging the claim that they got it “less wrong than the others.” Herein lies a new problem for the media, politicians and the public: their faith and belief in the accuracy of political polling.

As an industry and a service, polling has historically had a decent track record. From George Gallup’s 1936 call of the Roosevelt–Landon election with a comparatively modest sample, to the glory days of telephone polling in the 1980s and 1990s with 60% to 70% response rates, polling has been constantly evolving. Now that evolution is faster than ever before. And the biggest challenge, with declining turnout rates, is how to accurately model the voting population. Darrell Bricker sums it up this way:

“B.C. … really was a reflection of a problem emerging in Canada, and we have seen it over a couple of elections now, in which a phenomenon which is usually more of a force in other countries became a real force here. That was being able to predict, not necessarily how people are going to vote, but who was going to vote.”
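
Bricker’s point about predicting who will vote is, at bottom, a modeling problem: pollsters must screen their sample down to likely voters before tabulating vote intention. As a purely illustrative Python sketch – the scoring items, values and cutoff here are assumptions, not any firm’s actual model – a simple likely-voter screen might look like this:

# Hypothetical likely-voter screen, for illustration only – the items,
# scores and cutoff are assumptions, not any pollster's real model.

RESPONDENTS = [
    # (vote intention, voted last election, interest 0-10, knows polling place)
    ("Liberal", True, 9, True),
    ("NDP", True, 8, True),
    ("NDP", False, 4, False),
    ("Liberal", False, 3, False),
    ("Green", True, 7, True),
]

def likelihood_score(voted_last, interest, knows_place):
    """Crude additive engagement index."""
    return (2 if voted_last else 0) + interest / 10 + (1 if knows_place else 0)

def topline(respondents, cutoff=2.0):
    """Vote shares among respondents scoring at or above the cutoff."""
    likely = [r for r in respondents if likelihood_score(*r[1:]) >= cutoff]
    shares = {}
    for party, *_ in likely:
        shares[party] = shares.get(party, 0) + 1 / len(likely)
    return shares

print(topline(RESPONDENTS))  # counts only the projected electorate

Real screens are weighted, validated against past elections and far more subtle; the point is simply that the topline depends as much on the screen as on the raw responses.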

Canada has now been infected by this international problem. The American Association for Public Opinion Research (AAPOR) has taken the issue on. There have been inquiries elsewhere – for example, after John Major won a majority in the 1992 UK general election while the polls indicated a different result – that have resulted in improvements to the local polling ecosystem. Frank Graves considers another dimension of this problem:

“[What] I think is most important is that we more clearly understand the difference between the job of a pollster or market researcher to model their population, and the job of providing a forecast about a political event, or any event – a consumer decision, I suppose. I think that those two have been conflated in an unhealthy fashion, and we are seeing the media and others use the standard ‘did you get the election call correct’ as a measure of polling quality, and I think that that’s rooted in a historical context which no longer applies.”

A True Probability Sample is a Thing of the Past

While a quality probability sample is still regarded as the gold standard, the reality is that this is likely no longer feasible. With a majority of polling being done online, and with the growth in IVR (Interactive Voice Response) surveys, many have accepted that non-probability samples are the norm. Jeff Henning weighs in on two points:

“… People are saying it’s not true probability sampling anymore because of the response [rates], because of the expensive modeling of weight that is necessary to make it [polling] work.

…The battle that many firms are having is that their clients are asking them for more information, cheaper information and so it has led to the use of non-probability methods.”

Henning elaborated on the prevalence of online polls:

“We have looked at 250 press releases from March until the first week of May reporting survey results; 87% were reporting online surveys. I don’t think a single one of those was done with an online probability panel or a company like [GfK] Knowledge Networks.”
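
The “modeling of weight” Henning refers to is, at its simplest, post-stratification: weighting a non-probability sample so its demographics match known population margins. Here is a minimal sketch, with invented figures for both the sample and the census shares:

# Post-stratification sketch with invented figures: weight an online
# (non-probability) sample so its age mix matches census benchmarks.

sample = {"18-34": 150, "35-54": 450, "55+": 400}         # completes per cell
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # census shares

n = sum(sample.values())
weights = {cell: population[cell] / (count / n) for cell, count in sample.items()}
print(weights)  # roughly {'18-34': 2.0, '35-54': 0.78, '55+': 0.88}

The catch, and the reason this modeling is “expensive,” is variance: the scarce cells (here, younger respondents) carry weights well above 1, so a handful of respondents can move the topline.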

With the rise of software-based polling methods – notably online polling using internet panels of self-selected respondents, as well as IVR systems (typically referred to as “robodials” by the public) – the cost of entry for new methods and firms has never been lower. These methods are driven by the law of large numbers and shockingly low response rates. Gone are the days of excellent response rates to telephone (landline) polls. A 1% response rate on an IVR poll is considered “acceptable.” Gone also are the days of predictably engaging the public to garner their political inclinations.
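
To make that 1% figure concrete, the back-of-envelope arithmetic below (with an assumed target sample size) shows why IVR lives or dies by the law of large numbers:

import math

# Assumed target: how many dial attempts for a given number of completes?
target_completes = 800  # a common provincial-poll sample size

for label, response_rate in [("1980s landline", 0.60), ("modern IVR", 0.01)]:
    attempts = math.ceil(target_completes / response_rate)
    print(f"{label}: ~{attempts:,} attempts for {target_completes} completes")

# 1980s landline: ~1,334 attempts for 800 completes
# modern IVR:     ~80,000 attempts for 800 completes

Only automation makes 80,000 overnight attempts affordable, which is exactly why the cheap methods and the low response rates arrive together.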

After the 2012 US national election, Nate Silver, arguably the most famous political statistician of the moment, published an eye-opening analysis of polling data collection methodologies and pollster accuracy. The findings were revelatory – fast and cheap methods had larger respondent biases (toward supporters of specific political parties) and were less accurate.

Surprisingly, the best-performing poll was the Columbus Dispatch’s old-school mail survey. Henning notes that this was address-based sampling (“a true probability sample”). Overall, live-operator telephone and internet panel polls performed significantly better than robodials. These methods were better at establishing a more population-representative sample that captured the diversity of opinion and voting behaviour. However, they are also significantly more expensive than the cheap-to-operate, large-sample, conducted-overnight robodials. Clearly there is a trade-off here.

The consensus is that, while a true probability sample may still be feasible, it is cost-prohibitive at this point. While there is some dialogue about mixed-mode methods, including cell phone samples, to preserve a live-contact option, we are challenged to work with what we have. The continued evolution of methods – in this case digital methods, including online and now smartphone-based surveys – means that response rates will remain low. So the challenge of attaining a true probability sample is never going to go away.

Structured, Targeted and Regional Samples

One thing that was evident in the recent BC and Alberta elections was the issue of the balance and size of samples.

Keith Brownsey remains critical of the sample sizes being used. Alberta and BC are regionally diverse, and in his opinion pollsters have been using sample sizes that are not adequate. He notes that:

“A regionally diverse place between Vancouver Island, the Lower Mainland, the Okanagan and the North, even the North Coast, are very very diverse, and with a small sample from those regions you can’t really get a sense of voting intentions.”

I agree with Keith; my own observation is that online samples tend to perform better within urban environments. Thus, when we are seeing great diversity within our cities, and more homogeneity by region, the onus is upon pollsters to ensure that samples are large enough to provide a better regional perspective – and, more importantly, perspective on critical swing ridings.
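
The arithmetic behind Brownsey’s concern is simple: the margin of error scales with the square root of the subsample, so regional breakouts of a respectable provincial poll get noisy fast. A quick illustration, with regional sample sizes assumed for the example (and noting that non-probability samples strictly have no margin of error at all):

import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error at worst-case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed regional splits of an n=800 provincial poll.
regions = {"Lower Mainland": 400, "Vancouver Island": 150,
           "Okanagan": 120, "North": 130}

print(f"Province-wide (n=800): +/-{moe(800):.1%}")
for region, n in regions.items():
    print(f"{region} (n={n}): +/-{moe(n):.1%}")

A +/-3.5-point poll province-wide becomes +/-8 to 9 points in the smaller regions – far too coarse to read regional races, let alone individual swing ridings.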

In all elections there is an inherent skew in voter turnout patterns as a result of regionality. We have seen parties leverage this to their advantage – for example, the Conservative Party of Canada targeting rural and suburban Ontario. With regionality a dimension that needs to be grasped, why do we focus strictly on the horse race, especially when the real question is which seats are in play? Nate Silver states that he is the luckiest analyst in the world: almost 90% of the vote in the US is already decided, so he just has to focus on the remaining 10%. His interest is highly focused; he is seeking to understand the dynamics shifting that 10% of voters still in play.

Media & The Business of Polling

Political polling has traditionally been a test of the accuracy of a public opinion research firm. It is considered a loss leader (typically sold below cost), with the expectation that an accurate poll result will build a firm’s reputation and attract new and more profitable business. Over the last decade, this premise has changed dramatically. Mike Mokrzycki reflects upon three salient points: Gallup’s poor performance in the 2012 US national election, emerging firms, and oversight.

“The Gallup organization for decades ‘gave away’ its polling, and I think I read in Business Week the other day about how, in essence, the public polling is a loss leader for an organization like Gallup. We see this for others as well, where they get a tremendous amount of publicity for the polling – and ideally it’s good publicity – and that helps bring them business in other areas. Gallup, for instance, has a huge management consulting practice. Just because a company pays for polling on its own and gives it away for free doesn’t automatically make it bad. However, I do see cases where companies that we haven’t even heard of before will all of a sudden appear on the scene and do a bunch of political polling, and it gets reported, often without anybody taking very much of a look at the underlying methodology. There was at least one case, about 4 or 5 years ago, where AAPOR issued a public sanction over a standards complaint against a company that just refused to release any details about its methodology – it was a PR firm that started doing polling, and they don’t do polling any more.”

There is that old saying – “Fast, cheap and good: pick any two” – which is truly applicable here. While corporations typically choose a combination of fast or cheap with good, media outlets have opted for fast and cheap. The business model of polls and the media has evolved. Media outlets are currently either cash-strapped or losing money, and thus, in most cases, either do not pay for political polling or pay only for access to polls already conducted. Jeff Henning sheds light on the current state of journalism and polling:

“I know reporters who are freelancers who are paid on page views. They used to see a word count, a price per word – which hasn’t gone up over the last 10 years – and now they are getting paid on page views, so they have got to write multiple articles: write the article and get on to the next one. They want to do as good a job as they can, but they are in a hurry, and some content is better than none, and an online panel survey is better than no survey, so they will run with it.”

I have colleagues who have said they don’t conduct polls for the media unless they get paid. Well, there are a lot fewer of their polls in the papers now. On his own firm’s diminished media polling presence, Frank Graves adds:

“We had long-standing relationships that went on over a decade with larger players like the CBC and La Presse and Radio Canada, Toronto Star – they have all evaporated in this climate, and I don’t really think any meaningful relationships [remain] … and they were properly resourced; we didn’t get a lot of money, but enough to do a good job.”

During the last Alberta provincial election, a regional newspaper approached my firm to conduct a poll. We provided a quote, to which they responded by asking if we could do it for free, as “it would be good for your reputation.” We thanked them for the offer and declined.

Pundits play favourites. There is the additional dimension of politicos’ and the media’s obsession with the “horse race.” Many column inches are taken up with analysis of poll results and insights from pollsters (some of you may include this article in that category as well). While these stories do capture the pulse of an election, they do not take into account the overall election ecosystem and the body politic.

It is my sense that an educational effort really needs to be undertaken with the media so that they question the polls more vigorously. Graves states:

“The statistical fluency, methodological literacy, of the media today is appallingly low. That may be typical of a lot of areas where newsrooms have been cut back and we don’t have the same substance of experts.”

Instead of feeding the media’s addiction to publishing poll results and moving on to the next one, we could pay more attention to working with them to understand the method and what is going on in the data. This could yield deeper insights into where the voter population is at, and into how a poll is framed to gather its input. Too many times on Twitter we see people taking swipes at polling methodologies based on a weak understanding of how to develop a sample frame. We need to do a better job of educating these people, as they are highly influential within this sphere. This is something we have to do collaboratively with the media, to ensure better understanding among these advocates and influencers within the population, especially during political campaigns.

* * *

Next up: Part Two: The horse-race ignores context and nuance
