8th May 2017 Poll Results

The local council election triumph for the Conservatives has given pollsters much food for thought.

On one hand, the probable Conservative landslide is now more securely etched onto the electoral canvas, confirming the general picture that polls have been showing since just after the Brexit referendum. On the other hand, while Tory leads like this one – a 22-pointer, an outright record in the Guardian/ICM series dating back to 1983 – are now underpinned by real votes (even if there are difficulties in translating outcomes from one kind of election to polls measuring another), we must also reflect on the fact that the Projected National Share (PNS) from the council elections points to a closer General Election race.

First things first. This poll is remarkable, and historic. It puts the Conservatives on 49%, and Labour on 27%, implying that 22-point lead. Not only is the lead an outright record for any ICM poll, but the Conservative share is a record in the Guardian/ICM series. It is only beaten by a 49.5% share that we recorded for the Sunday Mirror in May 1983, when ICM was called Marplan. Also noteworthy is the continued decline of UKIP, now measured at 6%, its lowest share from ICM since January 2013.

The top-line figures are:

Conservative 49%

Labour 27%

Lib Dem 9%

UKIP 6%

SNP 4%

Green 3%

PC *%

Other 1%

So how should we reflect on a 22-point Tory lead when the PNS suggests ‘only’ an 11-point lead? (Professor John Curtice estimated the PNS at Con 38%, Lab 27%, LD 18%, UKIP 5%.) First of all, there’s the long-established recommendation to look at the shares, not the lead. Every point snaffled from Labour by the Tories equates to a 2-point move in the lead, thereby making a nice story but somewhat exaggerating the underlying positions.
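The arithmetic behind that "shares, not the lead" advice can be made concrete with a toy calculation (an illustration only, not anything from ICM's tables):

```python
# Toy illustration: why one point of direct switching between the two
# main parties moves the *lead* by two points.
con, lab = 49.0, 27.0             # shares from this poll
lead = con - lab                  # 22.0, the headline lead

# Suppose one point of Labour support switches straight to the Conservatives:
con_switch, lab_switch = con + 1, lab - 1
lead_switch = con_switch - lab_switch

print(lead, lead_switch)          # 22.0 24.0
```

A one-point change in the underlying shares thus doubles up in the lead, which is why the lead alone can overstate movement.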

Secondly – and this is not meant to be a positive spin story – we can be moderately pleased that in this poll, we exactly match the Labour share, and it’s almost smack on UKIP’s. The story of polls for just about forever has been the over-statement of Labour’s position, so if it’s the case that we’ve solved that riddle, well, it’s a good start. But the jury is very much still out on that and only the General Election will vindicate us, or not.

Clearly, if we are to take the PNS as the best evidence available of the current state of play, we are over-stating the Tories and seriously under-representing the Liberal Democrats. This is a whole new experience for the polling profession, well versed as we are in doing pretty much the opposite. With the last two years spent developing polling methods specifically devised to confront the Labour problem, we must consider whether we have now gone too far the other way.

In recent weeks, we’ve been paying close attention to the individual value of each of our post-data-collection methodological techniques, to see how far each actually pushes the vote shares in different directions compared with the raw data. Much more on this will be revealed at a later date, but the evidence so far is that the techniques are working in exactly the ways, and with the relative strengths (for the main two parties), we were looking for.
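The general shape of that exercise can be sketched as follows. This is a hypothetical illustration only – the sample, the weighting factors, and the `turnout_weight` step are all invented for the example and bear no relation to ICM's actual adjustments – but it shows how one can measure what each post-collection step does to the headline shares relative to the raw data:

```python
from collections import Counter

def shares(rows):
    """Weighted vote shares as percentages, from (party, weight) rows."""
    totals = Counter()
    for party, weight in rows:
        totals[party] += weight
    total = sum(totals.values())
    return {p: round(100 * w / total, 1) for p, w in totals.items()}

# Invented raw sample of 100 respondents, each starting with weight 1.
raw = ([("Con", 1.0)] * 44 + [("Lab", 1.0)] * 32 + [("LD", 1.0)] * 10
       + [("UKIP", 1.0)] * 8 + [("Other", 1.0)] * 6)

def turnout_weight(rows):
    # Hypothetical adjustment: down-weight parties whose supporters
    # report lower likelihood of actually voting.
    factor = {"Lab": 0.85, "UKIP": 0.9}
    return [(p, w * factor.get(p, 1.0)) for p, w in rows]

# Apply each step in turn and report the shares after it, so the
# contribution of every technique can be read off step by step.
steps = [("raw", lambda r: r), ("turnout weighting", turnout_weight)]
rows = raw
for name, step in steps:
    rows = step(rows)
    print(name, shares(rows))
```

In this toy run, turnout weighting alone moves the Conservative share from 44.0 to 46.6 and Labour from 32.0 to 28.8; chaining further adjustments into `steps` would decompose a full set of techniques the same way.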

Indeed, although this is an exercise in the absurd, if we had applied these techniques to our final prediction poll before the 2015 election, instead of predicting a 1-point Labour win as we did, we would have predicted a much more accurate election outcome.

But of course we have sought to correct an error that affected the main two parties, and we now live under significantly different electoral conditions. The performance of the Liberal Democrats in the council elections – at least in terms of vote shares rather than seats – implies that, if the PNS is correct, we have a new but real problem with them. That said, my view prior to 2015 was that we were over-stating the extent of their fall; in the event, we largely were not.

Some readers may feel my pain.

It would be rash for a pollster to panic themselves into methodological revision at this point. Too often of late, we have seen last-minute methodological moves that worsened predictive performance and brought accusations of herding. It would be wrong for any pollster with their reputation on the line to rule out methodological tweaks, especially if final poll samples are clearly out of kilter, but it is better to trust the methodology than to rush into error.
