NZSM Online


Feature

No Confidence

Polls can push a prime minister from power, but New Zealand survey techniques offer as much predictive power as examining chicken entrails.

By Miles Maxted

In the old days, people used to look at chicken giblets or the shapes of clouds for news of what the future might bring. With an election coming up, we face an interminable stream of polls and surveys claiming to predict our future.

Politicians and pundits spend millions to watch their fortunes change overnight as surveys report fractional swings. Careers are made and broken on the dubious claims of poll results. Look closely at the naked data, however, and you find it riddled with four very serious flaws -- non-random selection, the "birthday" bias, re-weighting and non-response.

Non-Random Selection

Pollsters claim to use random sampling, but often a cheaper alternative is used, such as quota sampling. In this, the interviewer is given a randomly chosen cluster of houses and told to fill a pre-set quota of people by age and sex from whoever happens to be at home. It is later claimed that this data has the plus-or-minus 3% error of high-quality probability sampling.

However, people chosen to fill a quota are not selected at random, no matter how their house was chosen. Claiming a random error for quota sampling is at best an act of faith, not a fact of mathematics. At worst, the surveyors are lying to their clients and sometimes to themselves. Such quota samples may be useful for testing ideas or products, but they are not to be trusted in estimating what the population thinks or needs.
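The plus-or-minus 3% figure itself comes from the standard error of a proportion in a genuine simple random sample. A minimal sketch of that arithmetic, in Python (worst case, p = 0.5):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a simple
    random sample of n people; p = 0.5 gives the worst (largest) case."""
    return z * math.sqrt(p * (1 - p) / n)

# The familiar claim assumes a true random sample of about 1,000:
print(f"n = 1000: +/- {margin_of_error(1000):.1%}")   # +/- 3.1%
```

The formula is valid only when every member of the population has a known, equal chance of selection -- exactly the property that quota sampling abandons.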

In the 1950s, only housewives were surveyed. It was easy to select houses at random to get a probability sample of housewives -- by definition, there was only one housewife per household. Life was good for surveyors in those days because sampling was simple.

In the 1960s, surveyors were increasingly asked to interview all people aged 10 and over. They kept on selecting houses at random and then randomly chose one occupant. The question "Whose birthday falls next?" is still used by interviewers to make a simple random choice. In addition, it doesn't require expensive statistical training for fieldworkers.

The "Birthday Bias"

Many surveyors still believe that this process gives each person an equal chance of being in a population sample. However, Table I shows the hole in this argument: the sample percentages should reflect those of the greater population, but they don't.

In a household of six people, an individual has one chance in six of being selected to answer the poll. A person living alone will always be chosen to respond. This clearly destroys the essential idea of probability sampling. Note that the biases based on birthdays can be three or four times larger than the standard 3% error for a true random sample of 1,000.
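A small simulation makes the distortion concrete. The household-size frequencies below are invented for illustration (they are not the article's Table I), but the mechanism is the one just described: one respondent per household, so every household -- not every person -- gets an equal chance.

```python
import random

random.seed(1)

# Invented household-size mix (shares of *households*, not of people).
SIZES   = [1, 2, 3, 4, 5, 6]
WEIGHTS = [0.20, 0.30, 0.17, 0.18, 0.10, 0.05]

households = random.choices(SIZES, WEIGHTS, k=100_000)
people = sum(households)

# Share of the population who live alone:
alone_pop = households.count(1) / people

# "Birthday" selection takes exactly one person per household, so the
# share of respondents who live alone equals the share of 1-person homes:
alone_sample = households.count(1) / len(households)

print(f"Live alone, population: {alone_pop:.1%}")     # about 7%
print(f"Live alone, sample:     {alone_sample:.1%}")  # about 20%
```

On these made-up numbers, people living alone are roughly three times over-represented, while any individual in a six-person household appears at one-sixth the proper rate.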

The consequences of this are not immediately obvious, but they can be dramatic. Survey estimates of the ages and sexes of New Zealanders become hopelessly inaccurate. So does every other measure associated in any way with the numbers of people in the home. Unfortunately, age, sex and household numbers are the very same basic demographic variables that we all use in business or political planning.

Consider survey reporting of media audiences. Some magazines, newspapers and radio stations are favoured by young people. Young people tend to live in homes with lots of occupants. Use birthdays to select your group and you eliminate a huge proportion of young people from the survey. Consequently, youth-oriented media are reported as having much smaller audiences than they really have.

The reverse happens for those living by themselves or in pairs. They gain a disproportionate influence merely by living in smaller households.

In cities like Christchurch, this actually inverts the real batting order of competing media on occasion. Obviously, the $1 billion spent annually on advertising in New Zealand is seriously misdirected by such sampling.

There are even more serious effects, however. Take the apparent decline in New Zealand evening paper readership. It may be nothing more than an artefact of bad sampling techniques. But it's too late to tell that to the Auckland Sun, which has passed from view after failing to profit from a distorted view of the market.

Re-weighting

Good statisticians report their doubts, noting any biases and giving a best estimate of what an unbiased sample might have shown.

Unfortunately, this is not common practice amongst surveyors. They tend to re-process the data to correct any obvious failings. This reprocessing is sometimes referred to as "re-weighting" in an attempt to clothe it respectably in borrowed terminology.

Table II demonstrates the significant inaccuracies encountered in re-weighting. Here are the results of averaging estimates from 100 samples, each of 150 people, from a test population -- the reported means should provide a very good match with the population mean. The birthday-biased samples obviously do not. As mentioned, they grossly over-estimate the older population and under-estimate the younger. Re-weight them to correct the age and sex biases, and they still don't mirror the population in any useful way. A true random sample would.

Clearly, attempting to correct birthday-biased samples by age/sex weights has as much practical value as does embalming a corpse in the hope of its resurrection.
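A toy simulation suggests why. All of the numbers below are invented (they are not the article's Table II); the point is structural: if young people cluster in large households, and the behaviour being measured rises with household size even within an age group, then weighting the age mix back into line cannot repair the damage.

```python
import random
from collections import Counter

random.seed(42)

def make_household():
    """One invented household: bigger homes skew younger, and (by
    assumption) listening to a youth station rises with household size."""
    size = random.choices([1, 2, 3, 4, 5, 6],
                          [0.20, 0.30, 0.17, 0.18, 0.10, 0.05])[0]
    members = []
    for _ in range(size):
        age = "young" if random.random() < min(0.9, 0.1 + 0.15 * size) else "old"
        p_listen = (0.55 if age == "young" else 0.05) + 0.02 * size
        members.append((age, random.random() < p_listen))
    return members

population = [make_household() for _ in range(50_000)]
people = [m for hh in population for m in hh]
true_rate = sum(listens for _, listens in people) / len(people)

# "Birthday" sample: one randomly chosen member per household.
sample = [random.choice(hh) for hh in population]
raw_rate = sum(listens for _, listens in sample) / len(sample)

# Re-weight so the sample's age mix matches the population's.
pop_mix = Counter(age for age, _ in people)
smp_mix = Counter(age for age, _ in sample)
w = {a: (pop_mix[a] / len(people)) / (smp_mix[a] / len(sample)) for a in pop_mix}
rw_rate = (sum(w[a] * listens for a, listens in sample)
           / sum(w[a] for a, _ in sample))

print(f"True audience:       {true_rate:.1%}")
print(f"Birthday sample:     {raw_rate:.1%}")
print(f"Re-weighted by age:  {rw_rate:.1%}")
```

On these invented numbers, the re-weighted figure moves toward the truth but never reaches it, because within each age group the birthday method still over-samples small households.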

Non-Response

The problem of non-responses is the worst of the lot. A surveyor may contract to interview 1,000 people at random, but finds that one out of two people doesn't want to co-operate. To counter this, a sample of 2,000 is used to provide the 1,000 respondents needed. The maximum error of plus-or-minus 3% is worked out assuming a 1,000-strong random sample. Right?

Wrong! The research ignores the 1,000 non-respondents. It glosses over the very real possibility that they refused to respond for some reason to do with the aim of the survey. Look in a survey report and you'll be very lucky to find any mention of the non-response rate.

Even worse, the calculation itself is entirely wrong. The error limits should be calculated on the sample of 2,000, not just the 1,000 who answered. When calculated properly, the real error is actually plus or minus 26.5% -- nearly nine times the error usually claimed.
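One way to arrive at that figure -- a sketch, assuming the 1,000 silent non-respondents are treated as complete unknowns who could lean anywhere from 0% to 100%:

```python
# Worst-case error when half of a 2,000-person sample refuses to answer.
sampling_error   = 0.03   # the usual +/- 3% on the 1,000 who answered
respondent_share = 0.5    # only half of the 2,000 drawn responded

# Respondents contribute half the usual error; the unknown non-respondents
# can move the combined estimate by up to 25 points either way.
total_error = respondent_share * sampling_error + (1 - respondent_share) * 0.5

print(f"+/- {total_error:.1%}")   # +/- 26.5%
```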

Cumulative Consequences

These are only four of the many sources of survey bias. Each shows that many current claims of scientific precision in sample surveying are no more than idle puffery. This is not to say that all sampling techniques are plagued by these problems. Sampling theory is secure -- it's the current practices that are a major worry.

Use these dubious practices and you have a survey technique which is as accurate as -- and a lot more expensive than -- examining the scattered entrails of a slaughtered chicken. If you really want to be certain about the outcome of the election, wait until the real votes have been counted.

Miles Maxted founded the National Research Bureau and now operates an Auckland marketing consultancy.

Poll-Driven Politics

Geoffrey Palmer probably rues the day his party took up polling as a means of gauging voter support. He wouldn't be the only politician. Jim McLay also fell victim to poor poll ratings and others in the past have come under threat from perceived public opinion.

Election polling in New Zealand didn't become established until the early 1970s, when major newspapers began publishing the results of independent research firms. Since then, the polls have become an institution.

In the run-up to the 1978 election, NRB polls showed Social Credit pushing Labour into third place. The results got banner headlines and prompted a short-lived attempt to unseat Bill Rowling. A mathematical polling model developed at Victoria University predicted a 20-seat win for Labour; opinion polls suggested a 6-seat majority for National. On election night, National finished 11 seats ahead of Labour.

In 1981, much the same thing happened. Social Credit again led Labour in the polls for some time, peaking at 31% support; Labour still won 41 more seats than its supposed competitor. In 1984, the lead seesawed between Labour and National from month to month. The last election had fewer swings, as the influence of the minor parties had dropped right away.

It is not just market research companies that are involved in polling New Zealand these days. One survey conducted in Helen Clark's electorate suggested that the Minister of Health was in grave danger of losing her seat. The poll was sponsored by a tobacco company, which may have had something to do with the results...
