I loved this long article in The Guardian. It covers quite a few topics that are important to me. Its general theme is how automation, or making a task "easy", can have rather perverse consequences. It starts by examining the absurdity of a particular plane crash but eventually winds up discussing autonomous cars and why traditional road signage is often unhelpful, especially to cyclists and pedestrians.
Here is an excerpt.
The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.
Now having read that let me just plant three little letters in your mind: G-U-I. Nothing better describes my frustrations with pointless graphical interfaces, especially Windows. But it’s not just computer systems for "dummies" that tenaciously insulate you from the truth of what is going on. Even pretty serious computer work can have this paradox of automation.
Last week I spent about 20 hours solving a completely maddening problem with the latest version of CentOS Linux (7). A user complained that some required technical software had stopped working. I discovered that not much of anything was working. I tried to use yum, the automatic package manager, to do an update, and it was broken too. It’s hard to use yum to fix things when yum itself is broken. This is exactly the kind of problem that automation trains us to give up on as unsolvable.
I finally tracked the problem down to misbehavior in Python itself, which yum is written in. Fortunately, I do have the skill to manually reinstall every package related to Python and yum without using yum itself. Unfortunately, the problem persisted. I then had the idea of checking an identical system to confirm it was working, and then looking for any differences between the two.
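For the curious, the yum-less recovery goes roughly like this. It’s a minimal sketch, assuming CentOS 7 and a still-working rpm; the hostname goodbox and the staging paths are made up for illustration, and the fresh RPMs could just as well come from a CentOS mirror.

```bash
# Step 1: obtain fresh copies of the Python and yum RPMs by any means
# that does not involve yum -- here, scp from a healthy identical
# machine ("goodbox" and its path are hypothetical).
mkdir -p /tmp/rescue && cd /tmp/rescue
scp 'goodbox:/root/rescue-rpms/*.rpm' .

# Step 2: reinstall them over the possibly corrupted installed copies.
# --replacepkgs permits reinstalling the same versions; --replacefiles
# overwrites files even if rpm thinks another package owns them.
rpm -Uvh --replacepkgs --replacefiles ./*.rpm

# Step 3: sanity-check what rpm now believes is installed.
rpm -qa | grep -E '^(python|yum)' | sort
```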
The highlight of the riddle came when I checked the `md5` hash fingerprint of the Python executable on both systems. They were identical. However, when I ran each of them with the `--version` option, they disagreed about what version they were. The point is that simply doing `yum update` to keep this machine operating properly all these years was not what prepared me to know how to actually discover the correct answer. (I won’t spoil the riddle for you, but if you’re curious the answer is here.)
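Here is the gist of the check that exposed the contradiction, again as a sketch; I’m assuming the interpreter lives at /usr/bin/python on both machines and reusing the made-up hostname goodbox for the healthy twin.

```bash
# The bytes on disk agree...
md5sum /usr/bin/python                # on the broken machine
ssh goodbox md5sum /usr/bin/python    # same hash on the healthy one

# ...yet at run time they disagree about what they are.
/usr/bin/python --version
ssh goodbox /usr/bin/python --version
```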
The article’s views on autonomous cars are not quite on track in my opinion (which I lay out in my last post). The authors seem to be thinking more about adaptive cruise control and lane-keeping systems. They’re basically envisioning an autopilot for cars that is similar to an autopilot for planes, in the sense that humans should be ready to take over at any moment (like Tesla’s). I think all of the article’s points apply to that kind of system. The paradox implies that as a part-time driver you need to be more careful, not less.
The kind of fully autonomous vehicle I’m envisioning will need human intervention as much as a train needs steering. My radical departure from mainstream thinking in the field is that road vehicle autonomy will require some concessions from the operating environment, just as trains can dispense with steering only because rails make it unnecessary.
In light of this paradox, the goal for autonomous cars is clear. Passengers should need to worry about driving exactly as much as airline passengers need to worry about flying, which is to say not at all. But behind the scenes, let’s hope the engineers designing and running these systems resist the temptation to go on autopilot themselves.