Numbers, graphics and a lot of hot air: Kerala poll predictions have turned into a joke - Firstpost


It's opinion poll time in Kerala.

The state’s largest TV news network, Asianet News, came out with its second survey last week, followed by Marunadan Malayali, a popular news website. Both have predicted an unequivocal victory for the CPM-led Left Democratic Front (LDF).

Early this year, the CPM’s party channel, Kairali, had also come out with a survey predicting a massive victory for the LDF. Given the unusual appetite for news among Malayalis and the profusion of TV channels, newspapers and websites, more such surveys could be in the pipeline.

Ultimately, all the polls might prove to be right in their "findings" for two reasons — one, Kerala’s electoral history of alternating governments, and two, the deluge of corruption charges against incumbent Oommen Chandy and his men (and one woman) that may make them unpopular. But what will remain unseen will be the "methodology" behind these polls because nobody knows how they arrived at the figures that they present with fancy graphical embellishments.

Let me leave aside the survey by Kairali, because it’s a party propaganda vehicle and its results are most likely motivated. Moreover, doing a survey in February for an election in May makes no sense. What is astonishing, however, are the studies by Asianet News and Marunadan Malayali.

Representational image. AFP


Asianet News does have a partner in C Fore, a research organisation which had apparently predicted the 2011 elections for the channel with a lot of "accuracy". From available information, Marunadan Malayali seems to have done the study by itself. Whether one partners with a specialist or not, what’s important is how one undertakes a study to find out what’s in the minds of voters. Is there some science to it, or is it simple second-guessing?

I have strong doubts about the science until the pollsters show their hand, which they haven’t. Asianet News ran extensive discussions and analyses on its poll, but not once did it say how exactly it arrived at these numbers, particularly the sampling method and the conversion of vote-shares into seats.

In any quantitative analysis, sampling is the key: choose the wrong sample and you get wrong answers. In an election survey, which tries to record what’s in the minds of all the voters, one has to choose the sample in such a way that it, at least approximately, reads all of them correctly. In statistical terms, this amounts to "equal probability sampling" — a sample in which every voter has an equal probability of being represented.

In a state of diverse demographics, it’s not easy. It would be possible if there were sampling frames from which to draw randomly, so that the sampled voters together represent the general electorate; but where are the sampling frames in our communities? The voters’ lists, municipal and panchayat records of residents, or anything else?

No, because the demographics and political affiliations are so diverse. This is where one has to be extremely careful about sampling. Expert psephologists appear to have cracked this over the years and arrived at some dependable forms of sampling, so that political affiliations, caste, religion and so on are factored in. It’s not simple random sampling, and, more importantly, what works in Kerala might not work in Punjab.
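The kind of sampling professional pollsters rely on can be illustrated with a minimal Python sketch of proportional stratified sampling. The strata, labels and sizes below are entirely hypothetical; they merely stand in for the caste, religious and regional groupings a real survey would have to account for:

```python
import random

random.seed(42)

# Hypothetical electorate of 10,000 voters in three demographic strata.
# The labels and sizes are invented purely for illustration.
population = (
    [{"stratum": "A"}] * 4500 +
    [{"stratum": "B"}] * 3500 +
    [{"stratum": "C"}] * 2000
)

def stratified_sample(pop, n):
    """Draw a sample whose stratum shares mirror the population's,
    instead of trusting plain random draws to get them right."""
    by_stratum = {}
    for voter in pop:
        by_stratum.setdefault(voter["stratum"], []).append(voter)
    sample = []
    for voters in by_stratum.values():
        # Allocate respondents in proportion to the stratum's size.
        k = round(n * len(voters) / len(pop))
        sample.extend(random.sample(voters, k))
    return sample

sample = stratified_sample(population, 1000)
shares = {s: sum(v["stratum"] == s for v in sample) / len(sample)
          for s in ("A", "B", "C")}
# shares comes out as 0.45, 0.35 and 0.20, matching the population exactly
```

A real design is far more elaborate (multi-stage selection of constituencies, booths and then voters), but the principle is the same: the sample's composition is engineered, not left to luck.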

At a higher level, the same complexity also applies to the constituencies, or the segments of constituencies, that pollsters select for their surveys. Do they give equal probability to all the other constituencies, wards and booths, or do they at least represent the overall demographics? Not easy. One has to statistically analyse past trends, voting patterns, the influence of castes and religions, socio-economic profiles, political affiliations and the overall representative value of the chosen constituencies or segments.

Equally important is the model they use for the conversion of vote-shares into seats. For instance, how does a party predicted to get 18 per cent of the votes (BJP) in a state get only 3-5 seats, whereas a party with a single-digit, marginal vote-share (Kerala Congress) has been winning more seats?

All that truthful psephologists can do is aim for an optimum technique and tell people its limitations before trumpeting the results. Claiming past accuracy is a dishonest short-cut: we have seen how NDTV failed miserably last time, although it had claimed many feathers in its cap over the years.

In other words, the pollsters have to tell people how they chose the sample voters and how they converted vote-shares into seats. It’s an ethical prerequisite in any study that claims to be scientific. Does one accept the data or findings of the NSSO without knowing its methodology and limitations? Does any researcher worth his or her salt get published without an upfront statement of methodology and limitations? In scientific sample surveys, the processes of design, testing and modification of survey instruments are critical and transparent.

But Asianet News didn’t tell its viewers its methodology and limitations. All that it would say was how many people it covered, the margin of error and how accurate it had been in the past. Unless one makes a complete disclosure, it’s not science but quackery. The numbers may turn out to be correct, but that’s possible in any game of probability; even a throw of the dice can get it right.

In the case of Marunadan Malayali, the "methodology" is hilarious. According to its website, it chose a "random survey" over other methods (whatever that means). How? By going to random places (shopping malls, for example, because Malayalis like air-conditioned places!) and putting questions to people chosen at random.

Any systematic technique of statistical value? From what they have described, none. More laughable is the claim that their survey is more reliable because they covered 25,000 people, which, according to them, is the biggest sample so far. What this statement betrays is a lack of basic understanding of how data and statistics work. Sample size means nothing without science.
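To see why sheer size doesn’t rescue a biased sample, consider a sketch with entirely invented numbers: 25,000 interviews drawn only from mall-goers versus 1,000 voters drawn at random from the whole electorate. The support levels below are assumptions made up for the arithmetic, not real figures:

```python
import random

random.seed(7)

# Invented electorate of 100,000. Suppose mall-goers (20% of voters)
# support a front at 30%, everyone else at about 52.5%, for a true
# overall support near 48%. All figures are made up for illustration.
mall_goers = [1 if random.random() < 0.30 else 0 for _ in range(20_000)]
others = [1 if random.random() < 0.525 else 0 for _ in range(80_000)]
electorate = mall_goers + others
true_share = sum(electorate) / len(electorate)   # close to 0.48

# A huge but biased "survey": 25,000 interviews, all at the malls.
mall_est = sum(random.choices(mall_goers, k=25_000)) / 25_000

# A modest but genuinely random sample of 1,000 voters.
random_est = sum(random.sample(electorate, 1_000)) / 1_000

# The 1,000-voter estimate lands near the truth; the 25,000-interview
# mall estimate stays near 0.30, wrong by roughly 18 points.
```

A bigger sample only shrinks random error; it does nothing about a selection bias baked into where the interviewers stood.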

It’s a sad irony that in a state with near-total literacy, these surveys are lapped up without any debate on their methodologies and limitations. Traditionally, journalists predicting results based on the visible public mood and some vox populi has been an acceptable part of election reportage. Now, the same seems to have been dressed up as surveys by some media outfits. Sample surveys require rigorous science and a robust mathematical model; they also require a lot of money. The short-cuts are easy — all they need are some numbers, graphics and a lot of hot air.
