For those of you who do not follow UK news, there was an election last week in the UK and the Conservative Party managed to squeak a small majority of the seats with 37% of the votes. This has caused a big fuss and makes the market research industry look very bad since the prediction (based on many, many polls) was for the votes to be split 34% to Labour and 34% to Conservatives – which would have left Labour as the largest party, and they probably would have formed a coalition or minority government.
Some countries already ban polls in the run-up to an election, and there are broadly two arguments that people who want to ban polls put forward:
- Polls can encourage people to vote for a party or candidate who is not their first choice – especially in countries where the voting system is not proportional. There is also concern that polls encourage copying behaviour, as opposed to considered decision making.
- If the polls are wrong, people may vote for a party that is not their first choice on the basis of bad information, distorting the election result.
The first argument about polls is a philosophical one. The counter-argument is that if information is available why should the electorate be denied it? This response has traditionally been the position of the research industry, i.e. that giving electors more information is a good thing, even if those electors use it to change their vote.
The second argument, about the danger caused by inaccurate polls, is, in my opinion, more of a worry. In the case of the UK, the Conservative Party and parts of the media spent the last couple of weeks of the election campaign saying that the UK was at risk because Labour and the SNP were very likely to form the next Government – indeed the Daily Mail told its readers this was the greatest threat since the Abdication Crisis (which was in 1936, so they were saying that a Labour/SNP government was a bigger threat to the UK than Hitler and the Second World War). However, this campaign was only credible because the election polls were wrong. In reality the Tories were ahead and Labour was going to do badly, so accurate polls would have made it impossible for the Tories and the media to run a scare campaign. The only reason the scare campaign was possible was the bogus nature of the election polls from all the leading pollsters (including both the internet and RDD CATI pollsters).
To some extent it appears the reason that the UK has a Conservative Government instead of the ‘no overall control’ that the electors appear to have wanted was because the opinion pollsters facilitated a scare. It would appear that errors in the polling predictions changed the result – this is part of what the review needs to check.
If polls can’t be trusted to be accurate and if inaccurate polls can change the election result, should they be banned?
Personally, I am reluctant to ban things in general; it is a slippery slope. Perhaps the solution is to have a penalty for inaccuracy (because the UK will certainly now have a penalty to pay). Perhaps we should have a rule that if the outcome, in terms of seats won, falls outside the pollsters' predicted margin of error, they have to pay a fine. For example, if at the last election the pollsters had known that getting the result as badly wrong as they did would cost them, say, £1 million, they might have widened the margin of error on their predictions.
Postscript: did the pollsters get it wrong?
Most of the pollsters have agreed they screwed up big time, and there is a review going on. However, a handful are trying to dodge the issue by saying they forecast 34%:34%, and that the result of 37%:31% is within their ±3% margin of error. This is bogus, entirely bogus. First, there were multiple polls; the total sample size was many thousands, so the sampling error (if anyone were crazy enough to believe it was relevant) would have been less than 1%. A single pollster can't now pull their poll out from the rest, and out from their own earlier ones, and retrofit a ±3% margin.
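To put a rough number on that: the familiar ±3 points comes from a single poll of about 1,000 respondents, and pooling many polls shrinks it fast. A minimal sketch using the conventional binomial formula (the 10,000 figure is illustrative, not a count of the actual pooled interviews, and it ignores design effects, which would widen things somewhat):

```python
import math

def binomial_moe(p, n, z=1.96):
    """Conventional 95% margin of error for a single proportion
    under simple random sampling: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A single typical poll of ~1,000 gives roughly the familiar 3 points:
print(round(binomial_moe(0.34, 1000) * 100, 1))   # ~2.9 points

# Pool the polls into, say, 10,000 interviews and it drops below 1 point:
print(round(binomial_moe(0.34, 10000) * 100, 1))  # ~0.9 points
```

On that arithmetic, a pooled 34% estimate against a 37% result is several standard errors adrift, not a routine sampling wobble.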
Secondly, even if we were to allow the Conservative estimate to vary by ±3% from the 34% estimate, we can't apply the same number independently to a second, connected measure. If there were only two parties, an estimate of 50% for Party A would carry roughly a 1-in-20 chance of the truth lying outside 47%–53% – and Party B's share would move in mirror image, to 53% or 47%. With multiple parties in a "pick one from n" question, you can't simply add and subtract separate sets of 3%. The cleanest way to look at the forecast is that 34%:34% is a forecast of a 0-point difference between the two parties. We can then test the probability that the difference is 1% or 2%, or that Party A leads Party B by 1% or 2%. On those tests, moving from a 0-point difference (or, in the case of one pollster, a 1-point lead) to a 6-point lead is not within any sensible margin of error.
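The point about connected shares can be made precise. For two shares from the same "pick one" question, the shares are negatively correlated, and the variance of their difference is larger than treating them as independent. A sketch under the usual simple-random-sampling assumption (n = 1,000 is an illustrative single-poll size, not any pollster's actual sample):

```python
import math

def moe_diff(p1, p2, n, z=1.96):
    """95% margin of error for the DIFFERENCE between two shares of
    one multinomial (pick-one) question from the same poll.
    Var(p1 - p2) = (p1 + p2 - (p1 - p2)**2) / n,
    which folds in the covariance Cov(p1, p2) = -p1*p2/n."""
    var = (p1 + p2 - (p1 - p2) ** 2) / n
    return z * math.sqrt(var)

# One poll of 1,000 with both parties on 34%:
print(round(moe_diff(0.34, 0.34, 1000) * 100, 1))   # ~5.1 points on the gap

# Pool to 10,000 interviews and the tolerance on the gap collapses:
print(round(moe_diff(0.34, 0.34, 10000) * 100, 1))  # ~1.6 points on the gap
```

So even for a single poll, a 6-point lead sits just outside the 95% interval on the gap, and against the pooled evidence it is several standard errors away – which is the sense in which "within ±3%" is not a defence.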
So, yes, the pollsters got it wrong. The estimates of Labour and Conservative shares were almost identical for the companies using non-probability internet samples and those using probability phone samples. In contrast, the Exit Poll (conducted with 22,000 people across 144 polling locations, interviewing people who had just voted, using a secret ballot method – dummy papers in a dummy box) was almost exactly right.