Feature

Pulling Polls Apart

If New Zealand pollsters had learned a thing or two from their British counterparts, they might have been less embarrassed on election night.

by Malcolm Wright

"Bugger the pollsters" was the last comment made by Jim Bolger in his election night address. Bolger was clearly frustrated by the failure of the electorate to deliver the National victory that the polls predicted. Nor was he alone in his surprise -- most commentators had expected that National's lead in the polls would be translated into an unequivocal election night majority. Instead, National's vote was within one percent of Labour's vote, and the election night result was a hung parliament with 49 seats to National, 46 to Labour and 4 seats to the minor parties.

Why did this surprise result occur? Did the pollsters get it wrong? Were they misinterpreted? Or was there a late swing?

These are not new questions. The problem came to prominence at the 1992 British General Election, when the pollsters predicted a Labour victory only to see the Conservatives returned, their estimate of the gap between Labour and the Conservatives out by 9% on election day; it has since been seen in Australia as well as New Zealand. These situations are highly embarrassing for companies that offer political polling as a loss-leader to promote their services. Nobody likes to see their most public product fail.

Before apportioning any blame for this inaccuracy, we would do well to understand what the potential sources of error are in polls, and then recount the lessons learnt from the British experience in 1992.

Sources of Polling Errors

When talking about error in polls, most people immediately think of the "margin of error", which is also known as sampling error. This measures the error involved in taking a simple random sample from a population. Saying that a poll has a margin of error of 3% merely means that 19 times out of 20 (for a "95% confidence interval") the error arising from taking a simple random sample instead of a census of the whole population will be 3% or less. Face-to-face surveys usually use cluster sampling, which has a greater error than simple random sampling, although this higher margin of error is not always reported.
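As a rough illustration of where such figures come from (the sample sizes below are illustrative, not taken from any particular poll), the margin of error for a simple random sample follows from a standard formula: roughly 1,000 respondents gives the familiar 3%.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a simple random sample of size n,
        evaluated at the worst case p = 0.5 (maximum variance)."""
        return z * math.sqrt(p * (1 - p) / n)

    # A sample of about 1,000 respondents gives the familiar +/-3%.
    print(f"n = 1000: +/-{margin_of_error(1000):.1%}")   # about 3.1%
    print(f"n = 400:  +/-{margin_of_error(400):.1%}")    # about 4.9%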

Non-sampling error is another major source of error that is not reported by pollsters or the media. Some of the sources of non-sampling error include refusals to participate, question ordering and phrasing that encourages certain responses, interviewers who unconsciously encourage certain responses, and respondents who provide socially desirable rather than truthful answers. Non-sampling error is very hard to measure, although for any one research company it is usually consistent from one poll to the next.

Sampling frame bias can also distort the results. Bias occurs when not all members of the population have the same chance to be selected in the sample. For example, if the telephone book is used as a sampling frame, the sample will be biased against those without a telephone, or those who have recently changed their telephone number, such as students.

Differing sampling frame bias and non-sampling errors lead to the variations observed between, say, Heylen polls and Gallup polls taken around the same time. However, if research companies use consistent methodologies, these errors should be about the same size for each poll. Although the snapshot taken by a single poll may be out by more than the margin of error, keeping the sampling bias and non-sampling errors consistent allows trends to be accurately recorded by successive polls.
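A toy simulation, with invented support levels and an assumed constant two-point bias, illustrates the point: the bias shifts the level of every poll by about the same amount, so the change between successive polls still tracks the true movement.

    import random

    random.seed(1)

    def poll(true_support, n=1000, bias=-0.02):
        """Simulate a poll whose method understates support by a constant 2 points:
        each respondent backs the party with probability true_support + bias."""
        p = true_support + bias
        return sum(random.random() < p for _ in range(n)) / n

    # Suppose true support rises from 40% to 45% between two polls.
    first, second = poll(0.40), poll(0.45)
    print(f"Poll 1: {first:.1%}   Poll 2: {second:.1%}")
    print(f"Measured trend: {second - first:+.1%}  (true trend +5.0 points)")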

The UK Experience

The "margin of error" was speedily rejected as the cause of the inaccuracies in the UK polls. It might explain the discrepancies in any one poll, yet all four final polls made the same prediction, with an unprecedented degree of agreement. Ivor Crewe, in the Journal of the Market Research Society, estimated that the chances of getting this result due to sampling error alone was less than 1 in 160,000. Similarly, in New Zealand the polls were generally consistent in putting National well ahead of Labour, and sampling error appears to be an unlikely explanation for the result.

In Public Opinion Quarterly, a group led by Roger Jowell examined the results of three post-election surveys aimed at explaining the discrepancy between the British polls and the election result. The group identified four effects which together explained around half of the discrepancy between the predicted result and the actual Conservative lead over Labour.

First, there was some vote switching between the three major parties. The net result was a small swing to the Conservatives, which would have increased their lead by a maximum of only 1.4%. This appears to cast doubt on any claim that the polls were an accurate "snapshot" at the time they were taken, but were undermined by a late swing to the Conservatives. Similar claims in New Zealand -- that the polls were undermined by a late swing -- must be treated with some caution.

Second, between one and eight percent of respondents were originally "don't knows". When they were resurveyed after the election, it was found that they supported the Conservatives more than they did Labour. The net effect was to understate the Conservative lead by a maximum of 0.4%. New Zealand has many more "don't knows", and it is likely that their effect would be proportionally greater.

Third, a substantial majority of "won't says" actually voted Conservative. This appeared to contribute up to 2.0% to the understatement of the Conservatives' lead. It has been suggested that this may have been because it was socially undesirable to be seen to be supporting the Conservatives -- a "shame" factor. The idea of a "shame" factor is purely speculation at this stage, but it does raise the question of whether some New Zealanders make "socially desirable" responses to surveys instead of stating their real voting intentions, or perhaps say "don't know" rather than divulge their political views.

Finally, more Conservatives turned out to vote than Labour supporters. This differential turnout contributed up to 1.2% of the understatement of the Conservatives' lead. This may have also happened in New Zealand; for example, post-election press reports stated that hundreds of National supporters in Khandallah simply did not get out and vote.

These four factors account for around five percentage points of the 9% understatement of the Conservative lead. Jowell suggests that the remainder is a problem of sample bias arising from the selection of sampling units and from refusals to participate in the street-intercept procedures used in the UK. There is some good, if not conclusive, evidence that he is correct.
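Totalling the maximum contributions quoted above shows how the figure of around five points arises; the tally below simply restates the numbers already given and is not part of Jowell's analysis.

    # Maximum contributions to the understated Conservative lead, in percentage
    # points, as quoted above from the post-election surveys.
    effects = {
        "late swing to the Conservatives": 1.4,
        "'don't knows' breaking Conservative": 0.4,
        "'won't says' voting Conservative": 2.0,
        "differential turnout": 1.2,
    }
    explained = sum(effects.values())   # 5.0 points
    print(f"Explained: {explained:.1f} of the 9.0-point discrepancy")
    print(f"Left for sample bias and refusals: {9.0 - explained:.1f} points")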

Lessons for Kiwi Pollsters

Fortunately, New Zealand pollsters use more reliable sampling methods. These include in-home interviewing and random-digit dialling telephone interviewing, with multiple attempts to contact unavailable respondents. The use of these techniques means that sampling bias is unlikely to be as great a problem as it was in the United Kingdom.

Nevertheless, pollsters are still subject to non-sampling errors in the event of poor questionnaire design, poorly trained interviewers, or failure to make the required callbacks on unavailable respondents. Respondent fatigue also appears to be an increasing problem, as people may refuse to participate in yet another of the many political surveys, or may stop cooperating well before the end of a lengthy questionnaire.

It would be sensible for the market research companies to resurvey their survey participants to determine both the net effect of these non-sampling errors and the effect of the factors involved in the UK situation.

In particular, the British evidence shows that we should not assume that "don't knows" and "won't says" will vote in the same proportions as those who have declared a voting intention. Unfortunately, this is the way in which every poll was reported during New Zealand's election campaign -- undecideds were allocated on the basis of declared party support.
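The convention is easy to state: drop the undecideds and rescale the remaining responses, which silently assumes that the undecideds will split in exactly the same way as decided voters. The sketch below uses invented shares purely for illustration.

    # Illustrative shares only, not from any actual poll.
    raw = {"National": 0.38, "Labour": 0.30, "Other": 0.12, "Don't know": 0.20}

    # The convention criticised above: drop the undecideds and rescale the rest,
    # which assumes they will split exactly like the decided voters.
    decided = {k: v for k, v in raw.items() if k != "Don't know"}
    total = sum(decided.values())
    reported = {k: v / total for k, v in decided.items()}

    for party, share in reported.items():
        print(f"{party}: {share:.1%}")
    # National is reported at 47.5%, but only if the 20% of undecideds really
    # do break in the same proportions; the UK evidence suggests they do not.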

Perhaps most surprising of all is the expectation that a single question at the end of a long survey could accurately predict anything, let alone something as complex as voting behaviour. This is essentially quick and dirty research, which can only give a quick and dirty result.

More sophisticated approaches are available which significantly reduce the "don't knows" and "won't says", while also allowing for the effects of late switchers. Janet Hoek of Massey University, together with The Dominion, trialled probability-based methods and secret ballots in three marginal seats. The result was a significant reduction in the undecideds over the traditional method of polling, together with an accurate prediction of the winning candidate in each of the three seats.

Those who really want to predict election results should look towards this work: identify the marginals, use improved methods for predicting voting behaviour and reducing the numbers of "don't knows", and predict the government majority based on the results in these marginals.

This approach will, of course, cost substantially more than existing polls. It remains a moot question whether someone will be willing to front up with the money to do the job properly. Undoubtedly, Bolger hopes that someone will.

Malcolm Wright is a lecturer in marketing at Massey University.