Before the night of the Australian Federal election was even over, questions about the result were being asked. How could polls, conducted so meticulously every week by numerous sources, fail to predict the outcome? And so began the public crucifixion of the polling agencies. While the internal reviews are likely to continue, a number of hypotheses are coming to the surface. Was the sample representative enough? Was the sample size too small? Was phone polling the right methodology? These are all valid questions and, being in the industry, ones I have been asked myself. However, it was another point on the matter that drew my attention.
One of the pollsters noted they had received results that would have correctly predicted the Coalition victory but decided to suppress them because they weren’t in line with what others were reporting. It’s human nature to want to fit in – no one wanted to be the person who turned up in school uniform on a free-dress day, for fear of being laughed at. But for a polling agency to withhold its own results simply because they didn’t fit is a worrying sign for an organisation that depends on confidence in its methodology and the integrity of its data.
It’s a tough job, especially when the margins of error are so tight in a contest that generally sees only a few percentage points separate the victors from the losers. But what set this year apart was the weight of a year or more of polling data that had continuously reinforced the belief that there would be a clear winner. Anything contrary to that data would have caused ripples but would likely have been discredited. With no way to actually prove who was right or wrong until election day itself, it’s understandable that no one wants to be the odd one out.
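To give a rough sense of just how tight those margins are, here is a minimal sketch using the standard margin-of-error formula for a proportion. It assumes a simple random sample, which real polls (with weighting and panel effects) only approximate, and the figures of a 1,000-respondent poll at roughly 50% support are illustrative, not taken from any specific election poll.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents, support near 50%:
moe = margin_of_error(0.5, 1000)
print(f"±{moe * 100:.1f} percentage points")  # about ±3.1 points
```

At around ±3 points, the sampling error alone can be as large as the gap separating the major parties in a close contest, before any methodological bias is even considered.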
As an agency that works closely with data, we encounter this type of scenario as well. Where I believe things fell down for the tall-poppy pollster was in the lack of deeper investigation or review of their data. It’s something we work very hard at doing at The Interpreters. We dig. If something doesn’t look right, we try to understand the causes or reasons why results may look out of the ordinary. We hypothesise and review, we test, and we make sure we have exhausted all possible explanations. Then we have the strength of our convictions to report what we find. Unfortunately, our pollster toed the line rather than standing out. Had they dug deeper into the data, conducted other research to validate their findings and stood by their results, they could now be praised and celebrated for their efforts rather than defending their actions alongside the whole polling industry.
By Chris Binney