Nate Silver, Donald Trump and the polls (AKA: question data, doubt certainty)

by Stephen Tall on November 9, 2016

I’m shocked, stunned and just a little bit scared. So I’m not even going to attempt to write something coherent about President-elect Trump.

While Brexit made things uncertain for the UK, the US election has made things uncertain for the world. Maybe “only” for four years. But that’s a heck of a long time.

So a brief something I can be emotionally detached about: pollsters and forecasters.

I don’t think I’m alone in having spent much of the past couple of months refreshing Nate Silver’s 538.com every few hours to check his latest reading of the runes. Now Nate has taken a lot of stick over the past few weeks for sticking stubbornly to his model, which put Clinton’s chances significantly lower than other forecasters did.

In a story which will come to rival the Chicago Daily Tribune’s “Dewey Defeats Truman”, the Huffington Post’s Ryan Grim took him head-on: ‘Nate Silver is unskewing polls — all of them. And he’s panicking the world’.

I doubt Nate’s feeling cheery about the outcome, but he’s still got the last laugh. What he consistently argued turned out to be scarily prescient: polling errors are pretty common, and this contest, with its high number of undecided voters and its unconventional Republican candidate, was extremely uncertain.
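To make that intuition concrete, here’s a minimal sketch in Python (my own toy illustration, emphatically not Silver’s actual model): the bigger the typical polling error, the less safe a modest lead becomes.

```python
import random

def win_probability(lead, error_sd, trials=100_000):
    """Estimate how often a candidate with a given polling lead actually
    wins, if the true margin is the lead plus a normally distributed
    polling error with standard deviation error_sd."""
    wins = sum(random.gauss(lead, error_sd) > 0 for _ in range(trials))
    return wins / trials

# A 3-point lead looks near-certain if polls are rarely off by much...
print(win_probability(lead=3, error_sd=1))  # roughly 0.999
# ...but much shakier if multi-point errors are common,
# as history suggests they are.
print(win_probability(lead=3, error_sd=5))  # roughly 0.73
```

The exact numbers don’t matter; the point is that how safe a lead is depends almost entirely on how wrong the polls can plausibly be.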

As I write, it looks like Clinton will secure a narrow win in the popular vote but, having lost the battleground states, will be defeated in the Electoral College. It’s the exact scenario 538.com posted a few weeks ago: ‘How Trump Could Win The White House While Losing The Popular Vote’.
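The arithmetic behind that split is easy to sketch. In this toy example (invented numbers, not the 2016 results), candidate A runs up a huge margin in one safe state while candidate B wins three battlegrounds narrowly:

```python
# Invented numbers, purely for illustration: A wins the popular vote,
# B wins the electoral college.
states = [
    # (name, electoral votes, votes for A, votes for B)
    ("Safe state",     30, 5_000_000, 4_000_000),  # A wins by a mile
    ("Battleground 1", 20, 2_400_000, 2_450_000),  # B wins narrowly
    ("Battleground 2", 18, 2_200_000, 2_250_000),
    ("Battleground 3", 16, 2_000_000, 2_040_000),
]

popular_a = sum(a for _, _, a, _ in states)               # 11,600,000
popular_b = sum(b for _, _, _, b in states)               # 10,740,000
electoral_a = sum(ev for _, ev, a, b in states if a > b)  # 30
electoral_b = sum(ev for _, ev, a, b in states if b > a)  # 54

print(f"Popular vote:   A {popular_a:,} vs B {popular_b:,}")
print(f"Electoral vote: A {electoral_a} vs B {electoral_b}")
```

Surplus votes in safe states count towards the popular total but win no extra electoral votes, which is all that scenario turned on.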

It took some guts for Nate Silver to stand by his model, and while pollsters and forecasters are taking a half-deserved bashing (the national polls were close; the state polls were not), it feels right to give him some kudos.

Final point: I like data. I find it fascinating in politics, and we use it intensively in my day job at the Education Endowment Foundation (generating evidence from randomised controlled trials in education). But it’s also right to be cautious, even sceptical, about data: don’t trust it blindly, question it intelligently. One trial in education doesn’t prove (or disprove) anything. Look with an open mind at the weight of existing evidence, then apply some critical, professional judgement when interpreting the findings — the sketch below shows one simple way of weighing trials together. Few things are certain: “everything works somewhere and nothing works everywhere”, as Dylan Wiliam is fond of noting.
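For the curious, here is a hedged sketch of what “weighing the evidence” can mean in practice: simple inverse-variance pooling of several trials’ effect sizes. The numbers are invented, and real meta-analysis involves far more judgement than this.

```python
import math

# Three hypothetical trials of the same intervention, each reporting an
# effect size and a standard error. Invented numbers for illustration.
trials = [(0.15, 0.10), (0.02, 0.08), (0.09, 0.12)]  # (effect, SE)

# Fixed-effect (inverse-variance) pooling: more precise trials get more weight.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

for eff, se in trials:
    print(f"single trial: effect {eff:+.2f} ± {1.96 * se:.2f}")
print(f"pooled:       effect {pooled:+.2f} ± {1.96 * pooled_se:.2f}")
```

No single trial here settles anything on its own; pooling narrows the uncertainty, but the answer still comes with error bars, which is rather the point.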

And anyway, uncertainty feels like the best bet right now. Otherwise all I’ve got is an inevitable dread of the next four years.