A review of the book Super Crunchers: Why Thinking-By-Numbers Is The New Way To Be Smart by Ian Ayres.
This book should have been written, but not quite like this.
To be sure, the combination of statistics with the increasingly large data sets of the Internet Age has created a new reality in the way humans plan and think. That is definitely important, and a whole book on the topic isn’t unreasonable. This book, however, despite being readable, fell short in some ways.
I personally found the title and its constant use kind of annoying. "Super Crunch" sounds like a sugary breakfast cereal I would not want to eat. I was therefore surprised to learn that the title had been statistically tested against better (in my opinion) alternatives. To me, making up a deliberately catchy name for the book’s topic distracts from how interesting the topic really is, in the same way that popular depictions of computer experts in movies detract from how interesting computer science can be. But hey, what do I know?
The chapters averaged 24 pages (standard deviation of 5.8), which sometimes felt kind of long. Maybe the ideal chapter length could be explored with statistical analysis next time.
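In the spirit of the book, here is a minimal sketch of how summary statistics like the ones above can be computed. The page counts in the list are hypothetical, not the book's actual chapter lengths:

```python
import statistics

# Hypothetical chapter page counts, for illustration only.
page_counts = [18, 22, 24, 27, 31, 20, 26]

mean = statistics.mean(page_counts)
stdev = statistics.pstdev(page_counts)  # population standard deviation
print(f"mean {mean:.1f} pages, std dev {stdev:.1f}")
```

With these made-up numbers the mean happens to come out to 24.0 pages; the spread, of course, depends entirely on the invented data.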
Several times I was a little annoyed by the conclusions and inferences the author made. Perhaps the worst was the discussion of how advanced statistical analysis with large data sets (no, I won’t be using the verb "to Super Crunch") could cause racism. This was, in my opinion, completely sensationalized. The author basically concluded that racist practices at service businesses would move from a personalized affair to one where the computer model was the offender. This implies that the computer models at banks and insurance companies would be set up specifically to make life difficult for certain racial groups. That is just not credible. Rather than many closet-racist CEOs looking for ways to safely persecute certain racial groups, it is far more likely that companies are thinking of discriminating against the same group their shareholders have always wanted them to treat badly: people from whom the company cannot extract the maximum amount of money.

I believe that statistical techniques could create large racial discrepancies, to be sure, but that is not racism, nor even necessarily a bad thing. It might just objectively indicate a problem. Consider a hypothetical algorithm that looked at video camera images from a cop car and advised the cop how to treat a driver based on how they were driving, the type of car, etc. If it turns out that cops are advised to be extra alert for people who happen to be from a certain racial group, that is interesting, but much less likely to be racist than if the cop just has a hunch. That’s really the whole point of the book, and I’m surprised he missed it.
The last chapter seems a hodgepodge of page filler. He half-heartedly tells us about Bayes’ Theorem and some arbitrary rule of thumb linking standard deviations to confidence levels in percent. Even the final paragraph shows a bit of a lack of imagination with respect to the topic at hand. He says, "I doubt that we will see quantitative studies on the best way to… peel a banana." The whole point of the book is that you just might. Sure enough, type "peel a banana" into YouTube’s search and you’ll get a definite oversupply of videos, each with a quantitative study called ratings.
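For readers curious about the two ideas the last chapter skims, here is a minimal sketch. The Bayes example uses hypothetical numbers (a test for a condition with 1% prevalence), and the rule of thumb is presumably the familiar 68–95–99.7 rule, which falls out of the normal distribution's error function:

```python
import math

# Bayes' Theorem, hypothetical numbers: a 99%-sensitive test with a
# 5% false-positive rate, for a condition with 1% prevalence.
p_cond = 0.01
p_pos_given_cond = 0.99
p_pos_given_no_cond = 0.05

# Total probability of a positive result, then invert with Bayes.
p_pos = p_pos_given_cond * p_cond + p_pos_given_no_cond * (1 - p_cond)
p_cond_given_pos = p_pos_given_cond * p_cond / p_pos
print(f"P(condition | positive test) = {p_cond_given_pos:.3f}")  # ~0.167

# The rule of thumb: fraction of a normal distribution falling within
# k standard deviations of the mean.
for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2))
    print(f"within {k} sigma: {coverage:.1%}")
```

Even with a quite accurate test, a positive result here only means about a 17% chance of having the condition, which is exactly the kind of counterintuitive arithmetic the book could have dwelt on. The loop prints the familiar roughly 68%, 95%, and 99.7% figures.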
UPDATE 2018-01-21
"Lots of people I know have been playing a giant game of chicken hoping someone else would write this article first so they didn’t have to – and it looks like Chris Stucchio and Lisa Mahapatra lost."
He’s referring to this article about how AI results don’t imply racism.
When I first read that I thought the same as Scott — ooh, ya, glad I didn’t write that. I had totally forgotten about this review and the fact that 8+ years earlier I had in fact made this exact point! Let’s just hope everyone continues to not notice me.