A review of The Signal And The Noise: Why So Many Predictions Fail — But Some Don’t by Nate Silver.

This book was OK. It wasn't terrible, and it's on a topic I am quite interested in. However, it just didn't do the topic justice. It's a rather mild and sporadic introduction to some of the problems and situations that come up when predictions are necessary.

The author specifically cautions against hubris, and generally his writing reflects that, but then he also sometimes says things like "…it is hard to improve on a good method for aggregating polls, like the one I use at [the author's prediction web site]."

Indeed, his real nugget of first-hand wisdom seems to be that taking a poll of polls is better than relying on any single poll. OK, not terribly shocking. Of course, it starts to sound a bit like a pyramid scheme, since someone has to do some real work somewhere.
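For what it's worth, the basic idea is easy to illustrate. Here's a minimal sketch of a sample-size-weighted average of several polls; the numbers are made up, and this is not the author's actual aggregation model, just the general flavor of a poll of polls.

```python
# Minimal sketch of poll-of-polls aggregation: a sample-size-weighted
# average of several hypothetical poll results for one candidate.
# Illustration only; not the author's actual model.

polls = [
    # (share of respondents favoring the candidate, sample size)
    (0.52, 800),
    (0.48, 1200),
    (0.51, 600),
]

total_n = sum(n for _, n in polls)
aggregate = sum(share * n for share, n in polls) / total_n

print(f"aggregate estimate: {aggregate:.3f}")  # ~0.499
```

The aggregation itself is trivial; the "real work somewhere" is the fieldwork behind each individual poll.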

The author is a bit religious about "Bayesian" thinking. I would not argue that Bayes' Theorem is useless or even generally ineffective, but as a solution to all of life's uncertainties, I feel it comes up short. For example, in the US people like to imagine themselves as "innocent until proven guilty," yet if we assign a prior probability of zero to guilt, then Bayes' Theorem will always produce innocence. I'm not saying Bayes' Theorem is correctly applied here, but I am saying that there are cases of uncertainty where a firm adherence to it will not be ideal. The author made no such allowance as far as I can tell.
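To make that concrete (my notation, not the book's), Bayes' Theorem says

$$P(\text{guilty} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{guilty})\,P(\text{guilty})}{P(\text{evidence})},$$

so a prior of $P(\text{guilty}) = 0$ forces the posterior to zero no matter how strong the evidence is.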

Despite having strong feelings about the futility of "frequentist" thinking, the author never really explains what it is or why it's supposedly so bad. He veers into almost ad hominem criticism of R. A. Fisher. To me, the idea of arbitrarily chosen confidence levels is about as weird as the idea of sketchy prior probabilities.
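For context (my example, not the book's): the textbook 95% confidence interval for a polled proportion $\hat{p}$ from a sample of size $n$ is

$$\hat{p} \pm 1.96\,\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},$$

and the 95% level (hence the 1.96) is a convention rather than something derived from first principles, which strikes me as no less arbitrary than picking a prior.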

His notion of "foxes" and "hedgehogs" might as well be translated as "winners" and "losers" for simplicity. As he points out in other parts of the book, it's easy to see who the foxes must have been in retrospect.

Before reading this book, my explanation for why some predictions succeed while most don't came down to one of two things: luck or cheating. Cheating covers things like insider trading or point shaving. This book did add a couple of possibilities for a mild type of cheating that I hadn't really considered so fully before. First, there is the aforementioned aggregation trick. Noted. And second, fishiness.

The author seems to have done a lot of his prediction training at the poker table. (Hmm, until it got too hard.) He talks about "fish," the participants in a poker game who simply lack the skills needed to consistently win. If you can put yourself in a situation where it's your predictions against theirs, you stand a good chance.

In addition to poker, the book wanders all over baseball stat geekery. I'm not really a baseball fan, so those parts were a little noisy for me.

The author's folksy introductions to the genuinely impressive list of people he interviewed for the book felt a little contrived. Sometimes his interviews seemed to carry little signal and were there mainly to impress us with name-dropping. Donald Rumsfeld, for instance, kicks off a chapter on counter-terrorism that didn't shed much light on anything in particular for me.

With so much random baggage in this book, it was inevitable that I would stumble across an area of personal interest. Unfortunately, I was not impressed at all. The topic is earthquake prediction, and the author isn't just entertaining a healthy skepticism about the ability to make predictions; he ridicules certain approaches outright, specifically animal reactions as precursor signals. I'm not saying that animals predict earthquakes, but the USGS web site specifically says, "However, much research still needs to be done on this subject." And far from there being no conceivable explanation, there do exist hypotheses compatible with the very large body of historical evidence, for example in the work of Thomas Gold.

This book didn't really address the technical nuances of the mathematical tools we use to refine predictions. It didn't investigate the important philosophical implications of probability. It didn't thoroughly cover the history or the important people who have investigated the topic (especially minor but interesting figures like Frank Ramsey, Bruno de Finetti, etc.). It didn't offer any compelling insights that will make my decision making better.

At 454 pages, that’s too much noise for not enough signal.