Prediction

I recently finished reading Nate Silver's book "The Signal and the Noise: Why So Many Predictions Fail - but Some Don't". Nate Silver is currently somewhat famous because his election predictions - he runs the blog FiveThirtyEight for the New York Times - were basically spot on. The book, timed to come out shortly before the election (in what I think was a sort of gamble to capitalize on a successful prediction), is about prediction, generally defined. Structured as a sort of collection of case studies, it talks about all the ways prediction can go wrong, and about what to do to try to get it right. There's no nice one-line answer, but I think he gives some pretty good advice.

The book is pretty non-technical, in spite of a valiant attempt to explain Bayes' theorem using only high school algebra, but that kind of works since one of his main points is that the errors people make in prediction are mostly not mathematical errors like miscounting, miscalculating, or even undersampling. The errors are more likely to be overconfidence, failure to consider unfamiliar but possible scenarios, or ignoring inconvenient evidence. Not to mention the biggest category of problem: predictions where being right isn't even important to the person doing the predicting.
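
Since Bayes' theorem comes up throughout the book, here is a minimal sketch of the kind of update Silver describes - my own illustration with invented numbers, not an example from the book:

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
# Invented numbers for illustration: updating a belief that it will rain
# after noticing the morning is overcast.

prior = 0.20                 # P(rain) before seeing any evidence
p_overcast_if_rain = 0.7     # P(overcast | rain)
p_overcast_if_dry = 0.3      # P(overcast | no rain)

# Total probability of seeing an overcast morning
p_overcast = p_overcast_if_rain * prior + p_overcast_if_dry * (1 - prior)

posterior = p_overcast_if_rain * prior / p_overcast
print(f"P(rain | overcast) = {posterior:.2f}")  # about 0.37
```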

This last problem, predictors with the wrong motivation, is most evident in political pundits, who are motivated to produce surprising or partisan predictions rather than right ones. But I think a more interesting example from the book is his section on weather forecasting. First of all, the US government weather forecasters are doing a pretty good job, and getting better as computing power improves (though human forecasters improve the computer forecasts significantly, he points out, by correcting for known limitations). The values they report are pretty much right - if they claim there's a 10% chance of rain, it rains about 10% of the time. (Incidentally, he doesn't talk much about the right way to check this sort of prediction, which is something I've been thinking about for a while.) But many people get their news from local weather programs or from for-profit weather forecasting sites. These sources all have access to the US government forecasts, and yet their predictions are substantially less accurate. They will, for example, extend predictions further into the future than the computer models are reliable, just to have something to report, even when it's no more accurate (and sometimes less) than a Farmer's Almanac. They also misreport their predictions: rounding near-50% predictions to 60% or 40% to look more knowledgeable, or artificially boosting small chances of rain because they'd rather be wrong by predicting rain when it stays dry than vice versa.
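
For what it's worth, one simple way to check this kind of calibration - my own sketch, not a method from the book - is to group forecasts by their stated probability and compare each group against how often it actually rained:

```python
# Calibration check sketch (mine, not from the book): bin forecasts by stated
# probability and compare with the observed frequency of rain in each bin.
from collections import defaultdict

# (forecast probability, did it rain?) pairs -- invented data for illustration
records = [
    (0.1, False), (0.1, False), (0.1, True),  (0.1, False), (0.1, False),
    (0.1, False), (0.1, False), (0.1, False), (0.1, False), (0.1, False),
    (0.6, True),  (0.6, False), (0.6, True),  (0.6, True),  (0.6, False),
]

bins = defaultdict(list)
for prob, rained in records:
    bins[prob].append(rained)

for prob in sorted(bins):
    outcomes = bins[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: rained {observed:.0%} of {len(outcomes)} days")
```

A well-calibrated forecaster's 10% bin really does come out near 10%, which is roughly what Silver reports for the government forecasts.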

This last kind of misreporting got me thinking, though. I don't think results should be misreported. But the way we react to predictions should depend not just on their likelihood but on the importance of their consequences. If the weather is predicted to have a 1% chance of rain, it's probably not worth cancelling your picnic. But if there's a 1% chance of a terrorist attack at a specific place and time, that's probably worth a lot more effort even though it's not very likely. Which brings me to another chapter I found particularly interesting in the book.

Silver talks about the September 11th 2001 terrorist attacks. In fact he talks to Donald Rumsfeld, who popularized the term "unknown unknowns". Possibilities the forecaster simply didn't think of are always going to be a problem for predictions. But Nate Silver does some rough calculations and points out that something like the September 11th attacks should not have been a total surprise. The scale of them - the number of people killed - was unprecedentedly large. But the sizes of terrorist attacks, like the sizes of earthquakes, follow a power-law distribution, and the September 11th attacks fall pretty much right on the line. As with earthquakes (which he talks about in another chapter) this doesn't predict any individual attack's time and place. But an intelligence agency that was watching the numbers should have had somebody keeping an eye out for attacks on this scale, because they could have expected one every decade or two. As for the details of the attacks, well, there have been other analyses of the ways they might have been found out ahead of time. The key fact, though, is that these attacks were part of a well-established distribution of attacks. They don't necessarily signal a new era of global terrorism or even a change in terrorist behaviour.
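
To show the flavour of that back-of-the-envelope reasoning - with invented numbers, not Silver's actual data - a power law is a straight line on a log-log plot, so you can estimate the slope from small events and extrapolate to get a rough recurrence rate for very large ones:

```python
# Power-law sketch (invented numbers, not Silver's data): if the number of
# attacks with at least N fatalities falls off as N**(-alpha), then
# log(count) vs log(N) is a straight line, and extrapolating it gives a
# rough recurrence rate for very large events.
import math

# (fatalities threshold, attacks of at least that size per decade)
observed = [(10, 500), (100, 50), (1000, 5)]

# Estimate the slope (alpha) from the first and last points
(x0, y0), (x1, y1) = observed[0], observed[-1]
alpha = -(math.log(y1) - math.log(y0)) / (math.log(x1) - math.log(x0))

# Extrapolate: expected attacks per decade with at least 3000 fatalities
n = 3000
expected = y0 * (n / x0) ** (-alpha)
print(f"alpha = {alpha:.2f}, expected per decade at N >= {n}: {expected:.1f}")
```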

Nate Silver also argues strongly that to make good predictions it's a tremendous help to understand the underlying mechanisms. Whether it's the weather, where chaos forces you to pour in tremendous extra effort for every extra day of lookahead, or chess, where what limits your predictions is just computation, having an understanding of the rules by which the process works can help. Earthquakes, for example, are basically not predictable individually: you can say there'll probably be a big earthquake in California sometime soon, and you can even predict how often earthquakes of a certain size will hit California, but there's no way to tell one day from another. He suggests that this is because the mechanics - the stresses deep underground, the strength of the rocks - are not accessible to us.

On the other hand, Silver points out the hazard of overfitting - of building a highly elaborate model that fits what is essentially noise. Such a model will fit the existing data quite well, since it was built from that data, but it can be worse than a simpler model when used to predict new data.
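
As a rough illustration of what he means - my own sketch, not an example from the book - fit a high-degree polynomial to a handful of noisy points from a straight line and it will track the training data almost perfectly, while typically doing worse than a simple linear fit on fresh data:

```python
# Overfitting sketch (mine, not from the book): a degree-9 polynomial fits
# 10 noisy training points nearly perfectly but typically generalizes worse
# than a straight line.
import numpy as np

rng = np.random.default_rng(0)
true_slope = 2.0

x_train = np.linspace(0, 1, 10)
y_train = true_slope * x_train + rng.normal(0, 0.3, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = true_slope * x_test + rng.normal(0, 0.3, size=x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```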

Finally, what about the election? How did Silver do so well? His prediction system is based on combining many polls. In principle, this should work rather well: people phone up voters, ask them how they're going to vote, and report that. Then on election day, people vote, mostly the way they said they would. How hard can it be? Well, unfortunately, there are myriad ways pollsters can introduce biases and problems, and usually all they report is an uncertainty based on sample size. So Silver's system estimates the reliability and bias of each poll, taking into account how far in advance of the election it was taken, and weights the polls accordingly when combining them. I'm not sure exactly how he builds his reliability estimates; past elections, I would guess, and perhaps cross-checking. But in general, combining independent predictions tends to produce better predictions than any individual one, provided you can avoid being carried away by widespread bias. And it seems Silver did that. The book is a decent guide to how.
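
I don't know the details of Silver's model, but the basic idea of weighting polls by estimated reliability can be sketched with inverse-variance weighting - my simplification, with invented numbers, not his actual method:

```python
# Poll-averaging sketch (my simplification, invented numbers -- not Silver's
# actual model): weight each poll by the inverse of its estimated variance,
# which folds in sample size plus a guess at house effects and staleness.
import math

# (candidate share, sample size, extra std-dev for house effects / staleness)
polls = [(0.52, 800, 0.02),
         (0.49, 1200, 0.01),
         (0.51, 600, 0.03)]

weights, estimates = [], []
for share, n, extra_sd in polls:
    sampling_var = share * (1 - share) / n    # binomial sampling variance
    total_var = sampling_var + extra_sd ** 2  # add estimated poll-specific error
    weights.append(1 / total_var)
    estimates.append(share)

combined = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
combined_sd = math.sqrt(1 / sum(weights))
print(f"combined estimate: {combined:.3f} +/- {combined_sd:.3f}")
```

The combined estimate ends up closer to the more reliable polls, and its uncertainty is smaller than any single poll's, which is the whole point of aggregation.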

5 comments:

Popup said...

Interesting!

This has been on my to-read list for some time now. Maybe it's time I got on to it.

It sounds like a companion to Taleb's The Black Swan or Dan Gardner's Risk: The Science and Politics of Fear.

Have you read either book?

Unknown said...

Hmm, no I haven't read either. Perhaps it's time to visit the library.

From their descriptions, though, they seem a little more focused on people's reactions, whereas Silver's book is aimed squarely at making accurate predictions. People's reactions and motivations are simply obstacles to this goal, though they're unavoidable obstacles, and he talks about ways to try to make sure the incentives for forecasters lead to accurate predictions. The classic, of course, is markets and betting - a more-accurate forecaster can always make money in the long run by betting against a less-accurate forecaster. In Physics Experiment Land, that is. In reality, we've seen things go badly wrong with markets, and even markets like InTrade intended specifically for forecasting.

mvc said...

anecdote: when my brother worked as a snow plow driver, he was (understandably) an obsessive weather forecast reader. over time, he came to prefer environment canada, saying that commercial forecasters would generally overestimate the probability of good weather three or four days into the future, then gradually downgrade their forecast as a storm approached. apparently (they believe) consumers want more bright sunshine icons in their forecasts, even at the expense of accuracy.

and speaking of following things obsessively, this cartoon summarizes why i was reading nate silver during the last two US presidential elections: http://cdn.pjmedia.com/vodkapundit/files/2012/11/121029_daily-cartoon-silver_p600.jpg

too bad no-one can pull that trick off here (much harder election system to simulate, and far less data available). silver himself did a rather poor job with the last general election in the UK, for example.

Popup said...

I just finished the book. Fascinating indeed!

What I found most interesting was the chapter on global warming. While it's true that both 'sides' of the issue commit statistical felonies, it's clear who's backing the better bet. And Silver manages to show it using only a limited amount of handwaving.

mvc said...

also: http://mathbabe.org/2012/12/20/nate-silver-confuses-cause-and-effect-ends-up-defending-corruption/