What’s wrong with surveys?

September 14, 2014

Political surveys in Pakistan continue to divide opinions, in addition to raising legitimate questions of credibility, validity and impartiality

Political surveys in Pakistan, focused on the performance of the ruling elite and state institutions as well as the popularity of the civilian and military leadership, have attracted a lot of attention recently. Although the government and the opposition have been quick to put a spin on the findings to suit their own interests, independent observers have also raised questions over the timing, methodology and interpretation of these surveys. It must be said that not all of this criticism is without merit.

Take the recent PILDAT survey on "Democracy, National Leaders and General Elections" as an example, which reports that 67 per cent of the Pakistani public (a substantial increase of 13 per cent) trusts the incumbent democratic setup. All this makes for an encouraging read, but with a caveat. Almost all surveys have inherent limitations, which need to be acknowledged when the findings are made public in order to enhance credibility. The PILDAT survey (available on their website), however, stops short of doing that. Let’s look at some of these glaring omissions from a layman’s perspective.

To begin with, the survey mentions a sample of 3,065 people who responded. However, there is no break-up of respondents across the provinces or along the rural-urban divide, and no details of the sampling methodology are given either.

To establish the credibility of a truly representative national survey, the very first criterion is a random sample. True randomisation is only possible when every one of Pakistan’s nearly 200 million citizens, irrespective of location, has an equal chance of being part of the survey. To play the devil’s advocate, such an ideal is quite difficult to achieve, and there are other strategies (such as stratified or cluster sampling) that can be employed to overcome this limitation. Was this done in the case of the PILDAT survey, and what procedure was applied to ensure it? If the answer is not in the affirmative, then the sample was at best a convenience sample, which introduces an element of bias and cannot be trusted for the validity of its findings.
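
To make the stratification idea concrete, here is a minimal sketch in Python of a proportionally stratified random draw. The province names are real, but the population shares and sampling frames are placeholders assumed purely for illustration, not figures taken from the PILDAT survey or the census:

```python
import random

# Hypothetical population shares by province/territory, illustrative
# figures only, not official census numbers.
POPULATION_SHARES = {
    "Punjab": 0.53,
    "Sindh": 0.23,
    "Khyber Pakhtunkhwa": 0.15,
    "Balochistan": 0.06,
    "Islamabad": 0.03,
}

TOTAL_SAMPLE = 3065  # sample size reported in the PILDAT survey


def allocate(shares, total):
    """Split the total sample across strata in proportion to population."""
    return {name: round(share * total) for name, share in shares.items()}


def draw(frame, n):
    """Draw a simple random sample of n units from one stratum's frame."""
    return random.sample(frame, n)


if __name__ == "__main__":
    allocation = allocate(POPULATION_SHARES, TOTAL_SAMPLE)
    # A real survey would draw from province-level sampling frames (e.g.
    # lists of enumeration blocks); tiny fake frames are used here only
    # to show the random draw within each stratum.
    frames = {name: [f"{name}-household-{i}" for i in range(5000)]
              for name in POPULATION_SHARES}
    for name, n in allocation.items():
        print(name, n, len(draw(frames[name], n)))
```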

Another problem with sampling lies in the break-up of respondents. Punjab accounts for more than half of Pakistan’s population. If the surveyors selected respondents simply in proportion to each province’s population, the results are likely to be skewed heavily one way or the other. To put it more simply, if a predominant majority of the sample comes from Punjab (where the PML-N is in power), the responses generated are likely to lean in its favour, which will tilt the balance of the survey. To overcome this problem, survey researchers need to include a few exploratory questions in the main questionnaire, essentially asking respondents for their political affiliation, as illustrated in the sketch below.
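
A minimal sketch of that check, in Python, using hypothetical respondent records and field names invented for illustration. It simply tabulates the realised sample by province and stated affiliation so that the break-up can be published alongside the headline numbers:

```python
from collections import Counter

# Hypothetical respondent records; in a real survey these come from the
# completed questionnaires, with affiliation asked as an exploratory item.
respondents = [
    {"province": "Punjab", "affiliation": "PML-N"},
    {"province": "Punjab", "affiliation": "PTI"},
    {"province": "Sindh", "affiliation": "PPP"},
    {"province": "Khyber Pakhtunkhwa", "affiliation": "PTI"},
]


def affiliation_breakdown(records):
    """Count respondents by (province, stated party affiliation)."""
    return Counter((r["province"], r["affiliation"]) for r in records)


def province_shares(records):
    """Share of the overall sample contributed by each province."""
    counts = Counter(r["province"] for r in records)
    total = sum(counts.values())
    return {province: n / total for province, n in counts.items()}


if __name__ == "__main__":
    print(affiliation_breakdown(respondents))
    print(province_shares(respondents))
```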

In analysing and interpreting the results of a political survey, the researchers need to acknowledge this break-up and make it public. If the data point towards a large number of respondents with a particular political orientation in the sample, the validity of the data collected needs to be reassessed.

Besides the sampling technique, the other omissions in the PILDAT survey concern the actual questionnaire and how it was administered. Any worthwhile survey makes the questionnaire part of the final report, so that readers can see for themselves what the respondents were asked.

The wording as well as the ordering of questions can nudge respondents towards answering in a certain way. Good surveyors remain extra cautious about this possibility when designing the questions and, as a further check, put the questionnaire out for the public to judge by making it part of the final report. The survey in question, sadly, has not done that. Although the final report by PILDAT is in English, there is no indication of which language was used in designing and administering the questionnaire. Considering the literacy rate in Pakistan, was any pretesting done to check that all the questions were easily understandable for potential respondents and to guard against biased or leading questions?
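
One standard safeguard against order effects, which the report does not claim to have used, is to randomise the order of questions independently for each respondent. A minimal sketch of the idea, with invented question texts:

```python
import random

# Hypothetical question texts, used only to illustrate the idea.
QUESTIONS = [
    "How much do you trust the current democratic setup?",
    "How do you rate the performance of the federal government?",
    "How do you rate the performance of your provincial government?",
]


def questionnaire_for(respondent_id):
    """Return the questions in an independently shuffled order for each
    respondent, so no single ordering systematically colours the answers."""
    rng = random.Random(respondent_id)  # reproducible shuffle per respondent
    order = list(QUESTIONS)
    rng.shuffle(order)
    return order


if __name__ == "__main__":
    for rid in (101, 102):
        print(rid, questionnaire_for(rid))
```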

As far as conducting the survey is concerned, the PILDAT report states that the questionnaires were administered face-to-face. This modus operandi can produce a social desirability effect among respondents. In simpler terms, they can feel obliged to answer in a "politically correct" way with an eye on pleasing the person asking the questions, particularly if the two have some prior familiarity with each other. This problem becomes even more acute in countries like Pakistan, where people like to extend courtesy to others.

In research, such courtesy (giving invalid answers) can jeopardise the data obtained. If the respondents are not surveyed in isolation, their presence in groups can also influence one another’s answers. To minimise this effect, researchers tend to employ field staff who have had no prior interaction with the respondents but resemble them (in appearance and social status) as much as possible, so that the subjects do not feel overwhelmed.

The administrators of the survey need special training so that they remain vigilant about this aspect during their work; the PILDAT report, however, does not indicate whether any such step was taken. Better still, respondents should be interviewed or surveyed separately, so that they can give answers they would not be comfortable giving in front of others. Where possible, they should also be allowed to complete the questionnaire anonymously rather than face-to-face, which can substantially increase the chances of getting genuine answers instead of socially desirable ones.

The said survey presents the final results to readers as percentage scores for each personality or party, to drive home which of them has a lead over the other. However, statistical analysis of survey data is a lot more than crunching numbers. The organisers of the survey ought to inform their audience what statistical tests were conducted to arrive at these results, because a gap between two figures that looks large to the naked eye may not turn out to be statistically significant once proper tests are applied and interpreted. The survey does state a 95 per cent confidence level, but this does not mean that the findings are 95 per cent accurate. Rather, it means that if the survey were repeated many times with the same sample size and methodology, about 95 per cent of the resulting confidence intervals would contain the true population figure.
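
To illustrate why this matters, here is a rough back-of-the-envelope sketch that assumes textbook simple random sampling (an assumption the report does not establish). It computes the margin of error implied by a sample of 3,065 at a 95 per cent confidence level, and runs a simple two-proportion z-test to check whether the gap between two reported percentages is larger than sampling error alone could explain; the 42 and 45 per cent figures are hypothetical:

```python
import math

Z_95 = 1.96  # critical z-value for a 95 per cent confidence level


def margin_of_error(p, n):
    """Approximate 95% margin of error for a proportion p estimated from
    a simple random sample of n respondents."""
    return Z_95 * math.sqrt(p * (1 - p) / n)


def gap_is_significant(p1, n1, p2, n2):
    """Two-proportion z-test: is the gap between p1 and p2 larger than
    what sampling error alone could plausibly produce?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se > Z_95


if __name__ == "__main__":
    n = 3065   # sample size reported in the PILDAT survey
    p = 0.67   # the 67 per cent "trust in the democratic setup" figure
    print(f"Margin of error: about +/- {margin_of_error(p, n) * 100:.1f} points")
    # Hypothetical comparison: 42% vs 45% support, each measured on an
    # independent sample of the same size.
    print("Gap significant?", gap_is_significant(0.42, n, 0.45, n))
```

Under these assumptions, the margin of error works out at roughly plus or minus two percentage points, so a small gap between two figures can easily sit within sampling noise.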

Lastly and most importantly, the organisers of a survey ought to make the name of the sponsor clear upfront to ensure that the entire exercise remains aboveboard and credible. PILDAT does mention that it commissioned the survey and that Gallup Pakistan carried it out, but there is no mention of whether it also provided the funding. If the sponsor is different from the organisation that commissioned the survey, the terms of reference also need to be made public.

Be it PILDAT or any other organisation presenting survey findings to the general public in Pakistan, hardly any attention is paid to these basic parameters of conducting a survey. Even where such an effort is made, the relevant details are kept secret for some unknown reason, which is unethical and contrary to prevalent international practice.

Media professionals, unaware of these finer points, latch on to the numbers thrown at them without bothering to find out what lies underneath. The ambiguity surrounding these fundamental characteristics of a credible survey also enables the political parties to infer their own meanings. Those who find the figures in their favour start beating the drum, whereas others are quick to raise eyebrows.

Interestingly, the brains behind these surveys hardly ever attempt to address such objections on the points discussed above. If they want their effort to be taken seriously, they need to reconsider their existing strategy. Otherwise, such surveys will continue to divide opinions, in addition to raising perfectly legitimate questions of credibility, validity and impartiality.
