Homo Deus: A Brief History Of Tomorrow is the follow-up to Yuval Noah Harari’s Sapiens (which I reviewed here). It is hard to know what to say about this book. The first blurb on the back is from the freakishly insightful Daniel Kahneman, and it immediately singles out this book’s core value.

It will make you think in ways you had not thought before.

Just so you don’t think I’m being lazy here, I had a look at today’s NYT hardcover nonfiction bestseller list, and for each of the top 15 books I calculated the percentage of Amazon reviews that contained the phrase "thought-provoking". Have a look.

1. Yes We (Still) Can - Dan Pfeiffer: 0/51 = 0.0%
2. Calypso - David Sedaris: 4/190 = 2.1%
3. The Soul of America - Jon Meacham: 2/231 = 0.8%
4. How To Change Your Mind - Michael Pollan: 3/128 = 2.3%
5. Trump’s America - Newt Gingrich: 0/65 = 0.0%
6. Educated - Tara Westover: 27/1389 = 1.9%
7. Bad Blood - John Carreyrou: 1/402 = 0.3%
8. Lincoln’s Last Trial - Fischer & Abrams: 0/36 = 0.0%
9. The Sun Does Shine - Anthony Ray Hinton: 3/211 = 1.4%
10. Astrophysics For People In A Hurry - Neil dG Tyson: 34/2851 = 1.2%
11. Born Trump - Emily Fox: 0/64 = 0.0%
12. Barracoon - Zora Neale Hurston: 3/177 = 1.7%
13. The World As It Is - Ben Rhodes: 0/73 = 0.0%
14. Room To Dream - Lynch & McKenna: 0/9 = 0.0%
15. Factfulness - Hans Rosling: 6/258 = 2.3%

Total for NYT NF Top 15: 83/6135 = 1.4%

That exercise itself was a bit thought-provoking. Check out how Harari’s book crushes this silly metric.

Homo Deus - Yuval Noah Harari: 128/1146 = 11.2%
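For anyone curious about the methodology, here is a minimal sketch of how such a tally could work, assuming each book’s Amazon reviews have already been pulled into a list of strings (the scraping step is omitted, and the two sample reviews below are invented purely for illustration).

```python
# Minimal sketch of the "thought-provoking" tally.
# Assumes the Amazon reviews for each book have already been collected
# into lists of strings; the sample reviews below are made up.

def thought_provoking_share(reviews):
    """Return (hits, total, percent) for reviews containing the phrase."""
    hits = sum(1 for text in reviews if "thought-provoking" in text.lower())
    total = len(reviews)
    percent = 100.0 * hits / total if total else 0.0
    return hits, total, percent

# Hypothetical example data for a single book.
reviews_by_book = {
    "Homo Deus": [
        "A sweeping, thought-provoking look at where humanity may be headed.",
        "Interesting, but it drags in the middle.",
    ],
}

for title, reviews in reviews_by_book.items():
    hits, total, percent = thought_provoking_share(reviews)
    print(f"{title}: {hits}/{total} = {percent:.1f}%")
```

A plain substring match on the lowercased text is enough for a rough count like this; a more careful tally would also catch variants such as "thought provoking" without the hyphen.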

Let’s look at a very typical example, but one I took a special interest in: here he’s vaguely pondering the nature of consciousness without getting too precise about what he means by that word.

Maybe we need subjective experiences in order to think about ourselves? An animal wandering the savannah and calculating its chances of survival and reproduction must represent its own actions and decisions to itself, and sometimes communicate them to other animals as well. As the brain tries to create a model of its own decisions, it gets trapped in an infinite digression, and abracadabra! Out of this loop, consciousness pops out.

Fifty years ago this might have sounded plausible, but not in 2016. Several corporations, such as Google and Tesla, are engineering autonomous cars that already cruise our roads. The algorithms controlling the autonomous car make millions of calculations each second concerning other cars, pedestrians, traffic lights and potholes. The autonomous car successfully stops at red lights, bypasses obstacles and keeps a safe distance from other vehicles — without feeling any fear. The car also needs to take itself into account and to communicate its plans and desires to the surrounding vehicles, because if it decides to swerve right, doing so will impact on their behaviour. The car does all that without any problem — but without any consciousness either. The autonomous car isn’t special. Many other computer programs make allowances for their own actions, yet none of them has developed consciousness, and none feels or desires anything.

The photo on this page (p. 115) is of Waymo’s Firefly/Koala (did it even have a proper name?). I’m pretty sure this particular specimen had absolutely no ambitions to talk to other cars (vehicle-to-vehicle, or V2V, communication). Brad Templeton, who advised Waymo on this project, has this to say about that issue.

[V2V is] definitely not necessary for the success of the cars, and the major teams have no plans to depend on them. Since there will always be lots of vehicles (and pedestrians and deer) with no transponders, it is necessary to get to "safe enough" with just your sensors. Extra information can at best be a minor supplement. Because it will take more than a decade to get serious deployment of V2V, other plans (such as use of the 4G and 5G mobile data networks) make much more sense for such information.

In addition, it is a serious security risk, as you say, to have the driving system of the car be communicating complex messages with random cars and equipment it encounters. Since the benefits are minor and the risk is high, this is not the right approach.

I point that out because this is one of the areas I know pretty well, and it makes me suspect Harari does quite a bit of such hand-waving elsewhere.

The first part of the book makes a surprisingly animated attack on the idea of eating meat. I eat very little meat, but other topics rank higher on my list of philosophical issues to worry about. Still, if you’re a vegetarian, you’ll like that part.

Harari spends a decent amount of time letting you know that your mind is composed of different cognitive actors. Most people who know me have been exposed to that idea before. I do like his clever term "dividual" to describe our collection of cognitive contributors.

He talks here and there about science fiction topics. The title refers to what humans may "evolve" into — what will be beyond us (Homo sapiens) on the evolutionary tree.

Hence a bolder techno-religion seeks to sever the humanist umbilical cord altogether. It foresees a world that does not revolve around the desires and experiences of any humanlike beings.

But when I read that, I wondered: why so complicated? Some people can already "upgrade" themselves with an ancient medical procedure that almost always strongly reorients a person’s priorities: castration. Yet Harari doesn’t talk much about why men aren’t improving their lives with that technological upgrade, so I’m not quite sold on the inevitability of fancier computerized versions.

Don’t get me wrong: I would recommend the book. It’s interesting, even if slightly questionable here and there, and it’s decently well written and engaging. Whatever its flaws, it is definitely a rare champion of "thought-provoking".