Today, in 2015, I do not believe it is possible to purchase a toaster that can reliably toast a piece of bread the same way twice. Yet the recent chatter of the technophile interwebs makes it seem like our most pressing issue is preventing Skynet.
People I generally respect, like Nick Bostrom, Sam Harris, and Stephen Hawking, are sounding the alarm. The publicity magnet Elon Musk is also concerned. Crazy futurists and respectable science fiction authors have covered this ground before, but for some reason there seems to be a new wave of genuine concern.
I’m not buying it.
I have already noted Karl Popper’s important observation from 1953.
In constructing an induction machine we, the architects of the machine, must decide a priori what constitutes its "world"; what things are to be taken as similar or equal; and what kind of "laws" we wish the machine to be able to "discover" in its "world". In other words we must build into the machine a framework determining what is relevant or interesting in its world: the machine will have its "inborn" selection principles. The problems of similarity will have been solved for it by its makers who thus have interpreted the "world" for the machine.
— Karl Popper
This may not be a factor once AI starts mutating wildly away from human design. But if AI needs to evolve just like human minds did to acquire sinister desires, I figure we’ve got a decent head start.
The ever pragmatic and sensible Economist points out some rather prosaic analogues while simultaneously hedging against an extraordinary turn of events.
But even if the prospect of what Mr Hawking calls "full" AI is still distant, it is prudent for societies to plan for how to cope. That is easier than it seems, not least because humans have been creating autonomous entities with superhuman capacities and unaligned interests for some time. Government bureaucracies, markets and armies: all can do things which unaided, unorganised humans cannot. All need autonomy to function, all can take on a life of their own and all can do great harm if not set up in a just manner and governed by laws and regulations.
The best analysis I’ve seen on the issue reached me by an interesting coincidence. I am a regular reader of Tyler Cowen’s blog and he had Ramez Naam as a guest blogger covering this exact topic. I had just finished reading Naam’s excellent novel Nexus (on a tip from none other than John Carmack) and was quite interested in what else he had to say. Naam has an even better article about the threats (or not) of strong AI on his own website. In that article he hits upon the business I’m currently in, computational chemistry.
Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power per dollar than was available then. But it’s still hard. Why? Because the problem is incredibly non-linear.
I can personally affirm this by pointing out with no disrespect that the smartest people in the field know essentially nothing about how any of it works. What I mean by this is that in 100 years of X-ray crystallography’s existence, the major thing biochemists have truly learned is how much more there is yet to learn. Any kind of biochemistry is, like Naam says, extremely non-linear. We barely know what kinds of chemical bonds are possible or what magic actually makes them work. Moving from that precarious foundation to things like protein folding and molecular signaling, the complexity goes up. And not linearly.
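To put a rough number on that non-linearity, here is a back-of-the-envelope sketch in the spirit of Levinthal’s classic estimate. The specific figures (three backbone conformations per residue, a hundred residues, a trillion states sampled per second) are illustrative assumptions on my part, not measurements:

```python
# Rough Levinthal-style estimate of why brute-force simulation of
# protein folding scales so badly. The "3 conformations per residue"
# and "100 residues" figures are illustrative assumptions, not data.

CONFORMATIONS_PER_RESIDUE = 3    # assumed discrete backbone states per residue
RESIDUES = 100                   # a smallish protein
SAMPLES_PER_SECOND = 1e12        # generously assume a trillion states checked per second

total_states = CONFORMATIONS_PER_RESIDUE ** RESIDUES
seconds = total_states / SAMPLES_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)

print(f"states to enumerate: {total_states:.2e}")
print(f"brute-force search:  {years:.2e} years")

# Doubling the chain length does not double the work; it squares the
# number of states. That is the non-linearity in one line.
```

The point is not the exact number; it is that adding residues multiplies the work rather than adding to it.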
This means that if you think we are going to comprehensively simulate organic human brains somehow with computers, I’ve got some bad news for you. If you think that we’ll be able to simulate some kinds of simplified neural networks, sure, that’s possible. But it is becoming apparent that simple neural networks, used as a standard engineering tool now, do not a superhuman intelligence make.
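For a sense of scale, this is roughly what I mean by a "simple neural network" as an everyday engineering tool: a few dozen numbers, a loop, and some calculus. The sketch below is a minimal hand-rolled example (the layer size, learning rate, and iteration count are arbitrary choices of mine, nothing canonical) that learns XOR:

```python
# A minimal sketch of a "simple neural network": a tiny two-layer net
# trained on XOR with plain numpy. All hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: squared-error loss, hand-derived gradients
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # typically ends up near [[0], [1], [1], [0]]
```

Useful, yes. Skynet, no.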
My own rationale for why a putative advanced AI will not be causing us too much trouble is the same as for why we haven’t killed all mosquitoes. They may be bothersome to us and we may kill any we can easily catch that are making nuisances of themselves, but despite (most of us) being smarter than the average mosquito, it would simply be a colossal waste of our resources to concern ourselves with every airborne pest of the Yukon Territory. Correspondingly, it is a huge conceit to believe that an especially intelligent being would have any interest in humans one way or another.
This whole discussion really is a type of Frankenstein (or, The Modern Prometheus) problem. Humans apparently are drawn to scary stories, and the idea of a human creation somehow overpowering its creator is a classic theme in literature.
Stories of Midas, the tree of knowledge, genies, golems, and Faustian bargains have cautioned that getting supernatural help may not be as great an idea as it first seemed. For the computer age the tradition continues with R.U.R., the Czech play that introduced the word "robot" while depicting robots in insurrection against their human creators. That theme is typical of the popular monster movies of the early 20th century. King Kong, for example, is about a human-like force thought to be under control, whose unplanned liberation caused more trouble than expected. Likewise, Godzilla was inadvertently summoned by poorly understood technology (nuclear weapons). This cultural environment seems to have greatly influenced writers such as Asimov (the Laws of Robotics), Arthur C. Clarke (HAL 9000), and Philip K. Dick (the source novel for Blade Runner). These stories in turn have inspired newer works like Spielberg’s A.I., Her, and Transcendence.
I suspect that humans will never tire of this genre. There will always be a compelling new way to present the theme. One lesson to draw from this cultural perspective is that like Dr. Victor Frankenstein’s "fallen angel", the monster is really us.
UPDATE 2021-05-12
An old article by the brilliant writer Ted Chiang says all of this better than I could hope to.