
I look ahead a few days here, to 10pm on UK General Election day. Polling stations will have just closed, and the major broadcasters (BBC, ITV and Sky) will simultaneously announce the findings of their jointly-commissioned exit poll — the headline always being the predicted number of seats for the (predicted) largest party in the newly elected House of Commons.

The exit poll is by now a big part of election day/night. Its expense is justified by the fact that the broadcasters get their all-time largest current-affairs viewing, listening and website-visiting figures during the first couple of hours after polls close on a General Election night — yet in those hours almost all of the votes are still uncounted. So the broadcasters need something for TV/radio/web commentators and on-air politicians to talk about during that time; and the exit poll is a major part of that.

The world’s financial markets take notice of the exit poll too — in a big way, as evidenced by the substantial movements in currency rates and other markets usually seen in the moments after 10pm on election day.

But how accurate is the exit poll?

The answer since 2005 has mostly been: very accurate indeed! The 2005 General Election saw the full-scale introduction of a completely new set of methods for designing and analysing a UK exit poll — methods that had been tested first by the BBC at the 2001 election, and found to work so well that they were adopted jointly by BBC and ITV for 2005 (with Sky News joining to make a 3-way consortium by 2010). In 2005 Labour’s reduced majority of 66 seats — which was surprisingly low to commentators who had all seen pre-election polls predicting a majority of over 100 seats — was predicted exactly by the exit poll. And then the same happened in 2010: the exit-poll prediction of 307 seats for the Conservatives, still some way short of an overall majority, turned out by the next day to be exactly correct. Especially when viewed against the historical backdrop of 1992, which will forever be remembered as the election where the BBC exit poll was quite spectacularly wrong, the “spot on” successes of 2005 and 2010 started to make the new exit-poll methods seem somehow magical!

But there is no magic — and I really can say this with some confidence, as co-inventor of the new methods (while working with John Curtice, for BBC election-night programmes in 1997, 2001 and 2005). The innovative use of statistical modelling is what transformed exit polling at UK general elections, from a rather hit-and-hope exercise (in the 1990s and earlier) to an activity whose on-the-night predictions are now much more likely to be fairly accurate. Still, any exact prediction of seats won by the largest party, such as was seen in both 2005 and 2010, owes as much to luck as it does to sound statistical thinking. There is nothing in the new methods that guarantees such freakish accuracy! Indeed, even getting a prediction error as small as 4 seats — as seen at the most recent General Election in 2017 — has to be regarded as extraordinarily accurate.

More typically, the exit poll ought to be expected to predict with an error in roughly the 5–15 seat range (for the main parties). Sometimes the error will be smaller than that (as seen in 2005, 2010 and 2017); and occasionally it might be larger.

For the full story of how well the exit poll has performed at successive UK general elections, along with lots of background material, see the online exit poll explainer.

(And for a bit more insight into the history of my own involvement in the exit poll, see this recent Twitter thread.)

But the main point here is this: While better methodology has radically improved the chances of an accurate prediction from the exit poll at a UK General Election, the super-accurate predictions seen in 2005, 2010 and even 2017 were unwarrantably accurate. Such an astounding level of accuracy is not guaranteed by the statistical methods used, and it definitely should not be expected every time!


Update, 14 December 2019: In the text above, written before the election on 12 December, I wrote:

...the exit poll ought to be expected to predict with an error in roughly the 5–15 seat range (for the main parties). Sometimes the error will be smaller than that (as seen in 2005, 2010 and 2017)...

The 2019 election saw, for the Conservative party total, an exit-poll error of just 3 seats; and so in the statement quoted above we could now say instead “as seen in 2005, 2010, 2017 and 2019”.

I also wrote:

...the super-accurate predictions seen in 2005, 2010 and even 2017 were unwarrantably accurate...

and that can now be amended to “the super-accurate predictions seen in 2005, 2010, 2017 and 2019 were unwarrantably accurate”. I still do believe this to be the case.


© David Firth, December 2019

To cite this entry: Firth, D (2019). Why we should trust the exit poll — but not too much! Weblog entry at URL https://DavidFirth.github.io/blog/2019/12/09/why-we-should-trust-the-exit-poll-but-not-too-much/