Chris X Edwards

Articles delivered as a series of Tweets are like a doctor smoking. I just can't get past that and take it seriously.
2019-12-04 07:13
To say Unix pioneers' passwords were bad would be like saying a 1950s lock box for mountaineers at the top of Everest has a weak lock.
2019-11-15 10:02
One of the best games on #Steam is choosing the best games on Steam.
2019-11-11 09:39
AMZN search:"beagle motor cape" (BeagleBone H-bridge motor ctl), I get a dog riding a motorbike in a cap. So much for unusual search terms!
2019-11-10 12:07
Information (entropy), Control (PID), Game (prisoners'), Set (infinity) - I'm starting to suspect that the "Theories" are one hit wonders.
2019-11-10 10:05
Blah Blah

Answers To Mysteries Of The Uber AV Fatality - Part 2

2019-11-17 06:26

Yesterday I posted Part 1 of my analysis of the NTSB's final report on the March 18, 2018 fatal collision of an Uber Advanced Technology Group (ATG) research autonomous vehicle with a pedestrian.

You may have noticed something unusual about Part 1 — I did not explore any technical issues related to autonomous driving. In my main post on the topic from last year and in Brad’s recent analysis there is some obvious attention to the technology.

I felt it was important to properly separate the two important relevant issues.

  1. Are idiot drivers terrible or the worst problem ever?

  2. Why exactly are autonomous vehicles not yet immaculate?

As I showed yesterday, the root cause of the fatal accident was an idiot human driver who happened to work for an autonomous vehicle research team. If an engineering team is designing a new kind of blender, we can’t say much about the safety of the blender if an intern mixes too many margaritas, gets drunk, throws the blender out a 10th story window, and kills a passerby with it. Of course if such an incident did occur, an official investigation might reveal all sorts of information of great interest to professional blender designers. That is what I am doing now. In some ways I feel bad for Uber because, as with the victim’s favorite recreational drugs, their internal operations are really none of our business. However, they do operate dangerous machinery in public and a bit of extra insight in this case seems like a fair trade.

The only thing I mentioned about autonomous vehicle technology in part 1 was this extremely provocative suggestion made by the Vehicle Operator.

Section 6f says, "When asked if the vehicle usually identifies and responds to pedestrians, the VO stated that usually, the vehicle was overly sensitive to pedestrians. Sometimes the vehicle would swerve towards a bicycle, but it would react in some way." Well, that seems on topic! At least to me! I believe I was the first uninvolved observer to notice that it seemed like the car steered into the victim. This has now been conclusively confirmed.

Let’s take a look at why I believe that the autonomous car did in fact steer into the victim.

The Vehicle Automation Report was fascinating to read. Autonomous car nerds should note that the radar made the first detection of the victim. As a bonus, it also immediately estimated her velocity vector. Of course it didn’t quite correctly guess that it was a person walking with a bicycle, but it did come up with a safe practical guess of "vehicle". Lidar came in 0.4 s later with no idea about the object’s nature, speed, or trajectory.

Here is a very important bit in 1.6.1. It’s a bit hard to parse, but read it carefully because — when combined with a grossly negligent safety driver — this is lethal.

However, if the perception system changes the classification of a detected object, the tracking history for that object is no longer considered when generating new trajectories. For such newly reclassified object [sic], the predicted path is dependent on its classification, the object’s goal; for example, a detected object in a travel lane that is newly classified as a bicycle is assigned a goal of moving in the direction of traffic within that lane. However, certain object classifications — other — are not assigned goals. For such objects, their currently detected location is viewed as a static location; unless that location is directly on the path of the automated vehicle, that object is not considered as a possible obstacle. Additionally, pedestrians outside a vicinity of a crosswalk are also not assigned an explicit goal. However, they may be predicted a trajectory based on observed velocities, when continually detected as a pedestrian.

If I’m reading that right, the system clears any historical path data about an object when it is reclassified. This seems crazy to me.
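To make the failure mode concrete, here is a minimal sketch of a tracker that behaves the way the report describes. Everything here, class names, method names, the coordinate scheme, is my invention for illustration; the report does not describe ATG's actual code.

```python
# Hypothetical sketch of the reported behavior: tracking history is
# discarded whenever the perception system reclassifies an object.
# All names and structure are invented for illustration, not ATG code.

class TrackedObject:
    def __init__(self, classification):
        self.classification = classification
        self.history = []          # list of (x, y) observations

    def observe(self, position, classification):
        if classification != self.classification:
            # The lethal design choice: reclassification wipes the
            # accumulated trajectory, so motion must be re-learned.
            self.history = []
            self.classification = classification
        self.history.append(position)

    def predicted_velocity(self):
        # With fewer than 2 points there is no motion estimate at all.
        if len(self.history) < 2:
            return None
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return (x1 - x0, y1 - y0)

obj = TrackedObject("vehicle")
obj.observe((0.0, 0.0), "vehicle")
obj.observe((0.0, 1.5), "vehicle")  # crossing the road
print(obj.predicted_velocity())     # (0.0, 1.5) -- motion is known
obj.observe((0.0, 3.0), "bicycle")  # reclassified: history erased
print(obj.predicted_velocity())     # None -- motion knowledge lost
```

An obvious alternative design would keep `history` across the reclassification and only change the goal model, which appears to be roughly what ATG did after the crash.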

Let’s think this through with an Arizona style example. Let’s say that the perception system detects an object and classifies it as a tumbleweed.


Here the white arrow is the trajectory the system measures and the magenta arrow is the trajectory the system predicts will follow. Because the object appears to be on a path that will take it out of the way by the time the car arrives, the car can comfortably keep heading straight ahead on the path shown in green.

But then the system gets a new, closer perspective on the object and changes its belief: now it thinks it is a cat.


If I understand this right, the trajectory of the tumbleweed is now discarded (!) and the cat object gets to start over with no trajectory data, indicated in white. Now for a future prediction about where the object is headed (magenta) it only has some preconceived idea of what cats do. And we all know that cats are lazy and mostly lounge around. (At least that’s what the internet is telling me.)

So even if the car could have neatly avoided the tumbleweed/cat by going straight (because it could see that the object was heading out of frame), once it changes its belief to a cat, it might try to swerve (green path) to avoid what it assumes is a lazy, lasagne-eating cat that is unlikely to be sprinting across the road.

But what if it is a cat but not doing what cats normally do?


If the cat is moving like a tumbleweed — and importantly, not like how the system believes cats move — you have a recipe for disaster. The system has thrown away valuable data about the object from before it was known to be a cat (white). It believes cats are lazy and sedentary (magenta). It is planning to swerve to a new path to avoid the cat (green). If the cat, or whatever it is, is in fact moving fast as originally perceived (black), the misunderstanding presents an opportunity for a collision (red X). A collision that painfully looks to human reviewers like the car was going out of its way to swerve into the cat.

In the report’s specific example — the subject of the report and a topic I care a lot about — the class of interest is bicycles. If the perception system had a murky feeling that there was something out there but didn’t really know what (e.g. "other"), it would assume that it was stationary, perhaps like the giant crate/ladder/mattress type objects I’ve encountered randomly on the freeway. It may start to believe the object is moving after some time of actually tracking it. It may even develop a decent sense of where things will wind up a few seconds into the future if nobody radically dekes. But if, as the perception system gets closer and re-evaluates things, it decides that the object is a bicycle, all that motion data is discarded.

It is the inverse of my example with the same effect — the system decides that what it thought was a stationary object is really an active one, but in reality the stationary assessment was more correct. Once the object is given properties of a typical bike as the best approximation of what to expect, it is out of touch with the true situation. And that is how a pedestrian in an odd location consorting with (but not riding) a bicycle gets wildly misunderstood by a complex software planning system.

With that in mind it’s very chilling to read the event log (Table 1), which shows the 5 seconds before the crash, starting when the lidar first became aware of the hazard. In that window the system changed the classification approximately seven times (the report vaguely says "several", which I’m counting as at least 3, but the table suggests seven), each time erasing trajectory information.

The report’s table 1 also confirms my analysis of the video that the car’s intended action was to cut inside the "cyclist" in preparation for a right turn. By erasing the victim’s trajectory at every muddled classification, the final guess used to make the best plan it could was that the victim was a cyclist and therefore likely to be going straight. This is exactly as I had predicted.

In my original post I called out this traffic engineering arrangement as the real crime here. Without trajectory data, assuming a cyclist in that location would be riding through the intersection like one of the strongest people in the neighborhood is entirely reasonable. Not having kept the contradictory trajectory data was an unfortunate mistake.

That explains the mistake that led to the fatal situation. But there was much more in the report. Incredibly it is also revealed that…

…the system design did not include a consideration for jaywalking pedestrians. Instead, the system had initially classified her as an other object which are not assigned goals [i.e. treated as stationary].

Damn. It’s easy to see why the classification system had trouble. Had they never seen jaywalking before? Very strange omissions.

So that’s bad. But there’s more badness. Braking.

"ATG ADS [Automated Driving System], as a developmental system is designed with limited automated capabilities in emergency situations — defined as those requiring braking greater than 7m/s2 or a rate of deceleration (jerk) greater than +/-5m/s3 to prevent a collision. The primary countermeasure in such situations is the vehicle operator who is expected to intervene and take control of the vehicle if the circumstances are truly collision-imminent, rather than due to system error/misjudgement."

I find this very strange. We had heard scary reports that Uber had turned off the Volvo’s native safety braking features. I figured those would be sensibly replaced by better ones. (Maybe they still are but…!) To limit braking effectiveness is madness. Yes, I understand that you need to not cause a pile-up behind you, but if your car needs to stop in an emergency, and it can stop, it should stop. To solve the problem of the mess behind, have rearward tailgating detection watching for that. If the rear is clear, there is no excuse not to stop at full race-car-driver deceleration when it is needed. I would think that letting the limit be whatever the ABS can handle would be best for stopping ASAP. To put ATG’s limit of 7 m/s² in perspective, this paper makes me think that 7.5 is normal hard braking and ABS does about 8, with 9 possible.
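To see what those deceleration figures mean in practice, a quick back-of-envelope calculation (mine, not from the report) using the standard stopping-distance formula d = v²/(2a). The 19.6 m/s speed is my stand-in for roughly 44 mph:

```python
# Back-of-envelope stopping distances at various deceleration limits.
# d = v^2 / (2a). Speed of 19.6 m/s is roughly 44 mph (an assumption
# for illustration, not a figure from the report).
v = 19.6  # m/s

# ATG limit, "normal hard" braking, typical ABS, best case:
for a in (7.0, 7.5, 8.0, 9.0):  # m/s^2
    d = v**2 / (2 * a)
    print(f"{a} m/s^2 -> {d:.1f} m to stop")
```

The spread between 7 and 9 m/s² is several meters of stopping distance, which at pedestrian scale is the whole ballgame.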

To be fair you can see the problem here. The report says that out of 37 crashes in 18 months by Uber ATG vehicles in autonomous mode, 25 were rear end crashes (another 8 were other cars side swiping the AV). Clearly if you’re out with idiot drivers, you need to be watching your backside.

(Let’s round this out because in my original post I wondered if there is "a ton of non-fatal mishaps and mayhem caused by these research cars that goes unreported? Or was this a super unlucky first strike?" The report goes a long way in answering this question. Looks like Uber was very unlucky. The details are interesting: In one striking incident (pun intended) the safety driver had to take control and swerve into a parked car to avoid an idiot human driver who had crossed over from the oncoming lane. In two more incidents the cars suffered problems because a pedestrian had vandalized the sensors while they were stopped. WTF? And finally in one single incident that could be blamed on the software, the car struck a bent bicycle lane bollard that partially occupied its lane — we can presume an idiot driver had been there first to bend it.)

But this topic gets weirder.

  • if the collision can be avoided with the maximum allowed braking and jerk, the system executes its plan and engages braking up to the maximum limit,

  • if the collision cannot be avoided with the application of the maximum allowed braking, the system is designed to provide an auditory warning to the vehicle operator while simultaneously initiating gradual vehicle slowdown. In such circumstance, ADS would not apply the maximum braking to only mitigate the collision.
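If I’m reading that policy right, it amounts to something like the following sketch. The 7 m/s² threshold is from the report; the function, its name, and the return values are all my invention:

```python
# Hypothetical sketch of the all-or-nothing emergency braking policy
# the report describes. Threshold from the report; everything else
# (names, return values) is invented for illustration.
MAX_BRAKING = 7.0  # m/s^2, the ATG-imposed limit

def respond(required_decel):
    """Action for a predicted collision needing required_decel m/s^2."""
    if required_decel <= MAX_BRAKING:
        # Avoidable within the limit: brake as planned.
        return ("brake", required_decel)
    # Not avoidable within the limit: alert the human and slow down
    # gradually -- notably, do NOT brake hard to merely mitigate impact.
    return ("alert_operator_and_gradual_slowdown", None)

print(respond(6.0))   # ("brake", 6.0)
print(respond(9.5))   # handed back to the (possibly distracted) human
```

The striking property is the discontinuity: at 6.9 m/s² the car handles it alone; at 7.1 m/s² it abruptly delegates the hardest possible situation to the human.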

Wow. That is a very interesting all or nothing strategy. "Oh, hey, I couldn’t quite understand that was a bicyclist and it now looks like only computer actuated superhuman braking can avoid a collision — turning control back to you human. Good luck."

Note that this has nothing to do with the false positive problem. I’m assuming only situations where the system is more certain of needing emergency braking than a human would be. To say you want to back off potential braking effectiveness because of false positives is addressing the wrong problem.

The report notes one reason the Volvo ADAS was disabled was to prevent the "high likelihood of misinterpretation of signals between Volvo and ATG radars due to the use of same frequencies." Really? That’s a problem? Do they know what else uses the same frequency as a Volvo’s ADAS radar? Another Volvo! I’m a bit worried if this is a real problem. And the Volvo’s ADAS brake module was disabled because it "had not been designed to assign priority if it were to receive braking commands from both [the Volvo and Uber systems simultaneously]." Again, that’s weird. This calls for an OR, right? If either braking system is, for whatever reason, triggered, shouldn’t there be action?

At least this tragic incident pushed Uber ATG to really fix up their safety posture.

Since the crash, ATG has made changes to the way that the system responds in emergency situations, including the activation of automatic braking for crash mitigation.

Also several stock Volvo ADAS systems now remain active while in automated driving mode. They even changed the radar frequencies to avoid any chance of confusion there. They also upped the braking g force limits. All sensible changes.

I consider the main lethal bug to be the resetting of the trajectory information upon reclassification. Apparently, they duly fixed that too.

Overall I never got the impression that Uber ATG was careless or particularly unsafe. I felt like all of their procedures were pretty reasonable given the fact that they are forging ahead into the unknown of our transportation future. There will be bugs and there will be mistakes. Not properly understanding the full depravity of using a phone in a car may be a special Uber blind spot due to the nature of their primary service. But it really did seem like they tried their best to do things safely and with a good backup plan. These technical failures are merely interesting and should not be taken out of context. Maybe the Uber engineers were testing how some dynamic reacts to a known bad system and they normally use a better one (don’t laugh, I’ve done it personally). But if they were doing something like that or even if their system just had bugs that needed to be found and fixed, their engineering team was counting on their safety drivers for safety. And as with the overwhelming majority of automotive tragedies, it was the human in that plan that let everyone down.

Answers To Mysteries Of The Uber AV Fatality - Part 1

2019-11-16 14:38

On November 5, 2019 the National Transportation Safety Board released its final report on the March 18, 2018 fatal collision of an Uber Advanced Technology Group (ATG) research autonomous vehicle and a pedestrian. The report can be found here. I have written extensively about this important incident in a post with critical updates last year.

With the new information it has been interesting to revisit this important historical event. Let’s review the questions we then wanted to have answers for that we now do. Quoting myself from my 2018-03-22 update…

The biggest difference between humans and AI however is on fine display with the safety driver recording a perfect advertisement for fully autonomous vehicles ASAP. I timed that the safety driver was looking at the road and paying attention to the driving for about 4s in [the limited driver facing] video; that leaves 9s where the person is looking at something in the car. If it’s not a phone and it’s some legitimate part of the mission, that’s pretty bad. If it is a phone, well, folks, that’s exactly what I’m seeing on the roads every day in cars that haven’t been set up to even steer themselves in a straight line.

In my update (2018-06-23) responding to reports that the driver was watching videos I said this.

It does seem that either Uber is extremely negligent for having their driver look at the telemetry console instead of doing the safety driving OR the driver is extremely negligent for watching TV on the job. Yes Uber could have done better [with the car’s technology], but that’s what they’re presumably researching; dwelling on that is beside the point. What we need to find out is if Uber had bad safety driver policies or a bad safety driver. If the latter, throw her under the bus ASAP.

As this is the critical question with literally millions of lives (to be saved by autonomous vehicles) at stake, I was hopeful that the NTSB report would conclusively provide an answer. It did not. It did however present a lot of very strong evidence.

You and I are effectively jurors deciding this case, the outcome of which will guide our thinking about autonomous vehicle development. The defendant is Uber ATG’s autonomous vehicle research. Were the management and engineering guilty of wrongdoing which caused this fatal outcome? From the news publicity, it doesn’t look good for Uber. Let’s review the case more carefully.

(Travis Kalanick is very happy that) the report says, "ATG’s decision to go from having two VOs in the vehicle to just one corresponded with the change in Uber CEOs." The acronym "VO", for "vehicle operator", pops up a lot and is central to the case. The facts are that ATG previously used two safety drivers (VOs) per car and reduced that to one prior to and including the fatal incident. This is bad. With two safety drivers I am confident that this crash would not have happened. But it is also understandable. Why not three VOs per car? At some point there are diminishing returns and if they think the car is near ready to go with zero humans, using one could seem adequate. It’s not like Tesla’s autopilot requires a passenger to be present.

Next on the list of very damning evidence is the engineered distraction for the VO. The Vehicle Automation Report says, "…ATG equipped the vehicle with … a tablet … that affords interaction between the vehicle operator and the [Automated Driving System]." Basically ATG replaced the Volvo’s infotainment touchscreen system with something similar but specific to the mission. Requiring a VO to attend to a screen while a 4800 lb death machine runs amok in public is categorically a bad idea. We know this because Waymo uses a voice system to accomplish the same goals. However, almost all of the evidence led me to conclude that the system was actually no more distracting than the stock Volvo system. Note that I’m not absolving that! But it clearly wasn’t radically worse than what the public accepts in general.

I said "almost" all of the evidence suggested that the onboard computers were compatible with a safe outcome. Some of the most interesting evidence is from the investigation interview with the VO, who obviously has a lot to gain by establishing that VOs were put in an untenable situation by design.

Indeed, according to the interview, the VO felt like the incident issue tracking system itself had "issues" and "…believed it was because the linux [sic] system did not work well with the Apple device."

Let’s look at a passage from the interview report that focuses on this engineered distraction.

6g. She stated that prior to the crash, multiple errors had popped up and she had been looking at the error list — getting a running diagnostic. 6h. She stated that when a CTC alert error occurs, she must tag and label the event. If the ipad doesn’t go out of autonomy then she has to tag and label the event. 6i. Her latest training indicated that she may look at the ipad for 5 seconds and spend 3 seconds tagging and labeling (she wasn’t certain this was stated in written materials from ATG).

She wasn’t certain it was officially documented because that is madness! Anyway, she’s painting a picture of a distracting technical interface associated with the project. That would definitely be bad.

It gets worse. Section 6f says, "When asked if the vehicle usually identifies and responds to pedestrians, the VO stated that usually, the vehicle was overly sensitive to pedestrians. Sometimes the vehicle would swerve towards a bicycle, but it would react in some way." Well, that seems on topic! At least to me! I believe I was the first uninvolved observer to notice that it seemed like the car steered into the victim. This has now been conclusively confirmed. It certainly doesn’t seem good for Uber.

The interview report makes no mention — not even questions that linger — about the VO doing some distracted stuff on her phone. It in fact accepts for the record in 6k that "…she had placed her personal phone in her purse behind her prior to driving the vehicle. Her ATG phone (company provided phone) was on the front passenger seat next to the [official mission related] laptop." Phone records showed that there were no calls or texts on the VO’s personal phone while on duty.

The report concluded that the VO does not drink alcohol and police found no reason to suspect intoxication of any kind. (The NTSB’s preliminary report couldn’t help but point out that, "Toxicology test results for the pedestrian were positive for [stuff which is none of our damn business]." This is how it should be. Should this impaired person have been driving? No. But she paid for that good judgement with her life.)

Uber ATG seems to have set up a bad system designed to impair a sober driver’s ability to respond to problems. The fact that I could not find in the report a vigorous defense of these systems, nor could I find any of the logged event data from the operator’s interface, was not encouraging. At this point in the story, it’s looking bad for Uber.

However! Whodunit mystery stories can change direction quickly. Often the guilty party is actively trying to cast suspicion on a scapegoat.

I said that I did not find a vigorous defense by ATG, but I did find this in the report, "For the remainder of the crash trip, the operator did not interact with the [official ATG vehicle management] tablet, and the HMI [human machine interface] did not present any information that required the operator’s input." I’m presuming that the NTSB were provided with logs and data to support this pretty definite conclusion. If this is true, it certainly makes the driver video, where she is looking at something, much more suspicious. It does leave open the possibility that the system was producing optional messages that, while not "requiring the operator’s input", were distracting.

The Washington Post reported back in June 2018 that local police say the VO was watching "The Voice" on her phone at the time of the incident. I commented on that report at the time, but it felt unofficial. I was hoping that this NTSB report would have better details to make this evidence incontrovertible.

In the Human Performance Group Chairman’s Factual Report section 1.3, "According to the police report, … she picked up a gray bag and removed a cell phone with a black case. She exited the facility and parked in the adjoining lot, where she then focused most of her attention on the center console of the SDV, where an Uber ATG-installed tablet computer was mounted. At 9:17 p.m., the VO appeared to reach toward an item in the lower center console area, near her right knee, out of site [sic] of the dashcam." They then go on to show photos of where they surmise an entertainment telephone may have been positioned.

Frustratingly, the tone of the NTSB report lacks certainty about where the VO’s attention was during the mission that ended in the crash. However, it does not contradict the June reports of video watching. In section 1.6, "Search warrants were prepared and sent to the three service providers. Data obtained from Hulu showed that the account had been continuously streaming video from 9:16 p.m. until 9:59 p.m. (the crash occurred at approximately 9:58 [9:58:46.5])."

And that’s where the facts mostly end.

The vehicle operator was distracted and an undistracted operator would have prevented the fatality.

  • Uber ATG says their systems, which presumably are accurately logged, were not doing anything requiring distraction.

  • The VO says her personal distraction machine was safely in her purse, presumably butt-dialing Hulu to play The Voice.

I was unsatisfied with the plausibility, however remote it might seem to you, of two contradictory explanations. There is a lot at stake here. In the end, we have to make a decision about the most likely version of the truth.

There is one more source of evidence that we can look at. The NTSB helpfully provided a data file of human-annotated labels of where the VO’s attention was during each half second for the 40 minutes prior to the crash. The plot of the VO’s attention in the report was muddled and unhelpful. I wanted to have a proper look at it, so I figured out how I could neatly visualize the state of each data point — no summarizing statistics, just the entire raw data as provided. As with many of life’s problems, the answer was ASCII art. In this plot an "_" means the VO was looking down at something in the car and a "+" means they were looking up at the road properly. The number is the minute after 21h00 on 2018-03-18 and each of the 120 characters per row represents one half second of that minute.
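Generating a plot like this is only a few lines of code. The sketch below is my own reconstruction of the idea; the input encoding (a boolean per half second) is an assumption, since the NTSB file is a spreadsheet of annotations that would need a little parsing first:

```python
# Render half-second attention labels as rows of 120 characters, one
# row per minute: "_" = looking down in-car, "+" = looking at road.
# The boolean input encoding is invented for illustration.

def ascii_attention(labels, start_minute=18):
    """labels: sequence of booleans, True = eyes on road, 2 per second."""
    rows = []
    for i in range(0, len(labels), 120):
        minute = start_minute + i // 120
        chunk = "".join("+" if on_road else "_"
                        for on_road in labels[i:i + 120])
        rows.append(f"{minute:02d} {chunk}")
    return "\n".join(rows)

# Two minutes of fake data: mostly distracted, occasional glances up.
fake = ([False] * 100 + [True] * 20) * 2
print(ascii_attention(fake))
```

The virtue of this kind of plot is that nothing is summarized away: every half-second observation is right there to be eyeballed.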


We can see from these investigator annotations that the VO is horrendously distracted! The question we must all answer for ourselves is this: Does the character of this distraction suggest interaction with a technical system designed to log unusual mission events, or watching TV?

To make sure we integrate all knowledge possible, I have included highlights at the points where the Onboard Image & Data Recorder Group Chairman’s Factual Report makes additional specific text annotations about the video of the operator. Those annotations are the following.


"VO drank from soda-like bottle."


"VO drank from soda-like bottle."


"VO smirked."


"VO turned head and smirked at another Uber Technologies vehicle while stopped at a stop sign."


"VO smirked."


"VO yawned."


"VO yawned."


"VO laughed."


"VO smirked."


"VO appeared to be singing."


"VO appeared to be singing."


"VO yawned."


"VO was singing in the manner of an outburst."


"VO nodded no and then yes."


"VO nodded no."


"VO yawned and then smirked."


"VO yawned."


"VO drank from a soda-like bottle."


"VO reacted abruptly while looking forward of the vehicle."


"Impact occurred."

Everyone needs to make their own judgement, but my conclusion is that the vehicle operator was watching TV. In the report Uber ATG documents many commendable safety improvements, but solving the nonexistent problem of their interface distracting the driver for 49.4% of the time is not one of them! I believe that is because it is a spurious problem. Uber’s claim that their logging system did not log any events or distract the driver prior to the crash seems more reliable than the implicit claim by the VO that the video streaming to her phone was not being watched and smirked at. (Interestingly I used the exact word "smirk" last year in my review of the limited final 13 seconds of pre-crash driver footage released to the public.)

I don’t need this woman to be burned at the stake or even go to jail. What I need her to do, and that’s why I’m doing this, is to exonerate autonomous vehicle research efforts. This all too typical idiot driver was unlucky enough to explicitly kill a vulnerable road user. That of course bothers me, but I’m more concerned about the thousands she may yet kill, statistically, by retarding progress toward a world devoid of idiot drivers like her. She is, as I mentioned previously, a perfect advertisement for exactly why autonomous vehicle research is so important.

UPDATE 2019-11-17

Part 1 explains the fatality. It was caused the same way that most automotive fatalities are caused — by an idiot human driver. For a look inside the specific autonomous technology problems the Uber engineers are working on, see my part 2 which goes into some more detail about the technical challenges.

Snow Boating

2019-11-08 15:01

Here I am doing the final check for the final regular autonomous boating mission of the year.


And here’s what that looked like from the other side.


It was actually a fantastic day out on the water. Superb scenery and data. Although I would like to keep going out, it turns out that stern drive engines are strangely well insulated from the relatively warm water they might be sitting in, meaning that leaving such a boat on the water at this time of year without complex shore powered heating is likely to crack the engine block. Not the first time my gear failed to match my cold weather enthusiasm.

I feel like this boating season has been quite a success and I am now looking forward to the off season when I can focus more on the system architecture and AI voodoo.

Coolest Boater

2019-11-04 04:40

Yesterday morning @dpbuffalo kindly tweeted this photo.


Yes, it’s cold, but I like that sort of thing. At least I can just sit there with my hands in my pockets while the boat completely drives itself the entire route.

Here is the photo I took very near that spot in the reverse direction.


Notice the excellent clouds, lighting, tree conditions, water textures, lens flare, etc. The reason I keep pushing to go out is to collect fantastic data like this.

Here is a dramatic scene while the boat drives out of the marina.


Less than four minutes later, the lighting and water texture are quite different.


(This may be a good time to consider the profound difference between stupid looking hats and stupid hats.)

Also shown is my favorite camera, a fun little side project which I built out of trash left over from some GoPro packaging plus microscope slide plus Raspberry Pi plus Linux. Unlike a GoPro, it is rain proof while connected to power. You can imagine that being useful to me given the weather I go out in.

Here’s a sample from a couple of days ago in the rain.


And my trash camera floats! This makes it easier to retrieve valuable data if it ever goes overboard. A similar consideration is also wise when choosing hat color!

Review: The Mythical Man Month

2019-10-06 19:33

If you’re an experienced professional computer nerd you have most likely heard of The Mythical Man Month by Frederick P. Brooks, Jr. or at least some reference to something from it. I certainly had. Then I came across some clever wisdom attributed to it and I thought, that’s clever and wise — I should read that book. I’m glad I did!

It was a bit of a historical tour through the culture of software engineering as it was when my grandfather experienced it. But the reason to read it today is that it is quite apposite now too. Some dated examples show that this book was specifically written about software projects of the mid 1960s. They are astonishingly relevant. Nothing new in software! The book is nice and short at 177 pages — perhaps reflecting an era when computer documentation and code alike were much more economical.

I’m going to just jot down my thoughts about interesting passages, a style not unlike the book itself.

I like how Brooks labels this "the world’s largest undocumented computer".


"Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program." [p8]

This is funny because it is so true. I think about professionals like lawyers and even engineers who imagine they’re getting to some kind of objective outcome. From my perspective of trying to sway the judge and jury that is a C compiler, I feel like other professions mostly don’t have a clue how serious this perfection requirement is with programming. It is literally inhuman.

"The dependence upon others has a particular case that is especially painful for the system programmer. He depends upon other people’s programs. These are often maldesigned, poorly implemented, incompletely delivered (no source code or test cases), and poorly documented. So he must spend hours studying and fixing things that in an ideal world would be complete, available, and usable." [p8]

Ugh. Yes, this is my most oppressive problem in getting computers to do what I want. Today everyone is a system programmer. People (professionals in the business!) desperately want to understand as little as possible about every part of any kind of computer solution. They cling to a morass of frameworks and wrappers and containers and wizards and magic cloud APIs until it’s just a complete mess. Then if they’re really good they proudly present a "solution" that kind of works in some cases… for another few weeks until one of the nodes in their dependency graph dies.

"The real tiger is never a match for the paper one, unless actual use is wanted. Then the virtues of reality have a satisfaction all their own." [p9]

So true! I’ve definitely been the victim of that one. For example, I love the people who criticize my web site and then when I ask to see theirs it does not exist.

"Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them." [p16]

This is one of the key principles that this book established in software engineering. I don’t think it is properly taken into account most of the time. As a side note, that quote could be from a lesson on programming modern GPUs.

How he schedules software tasks [p20]:

1/3 planning
1/6 coding
1/4 component test and early system test
1/4 system test, all components in hand

He points out that this is a larger budget for planning than is typical. Today I feel that most software has no planning whatsoever. I have to say that his ratios are pretty close to my habits. I may even go a bit higher on planning if "thinking about the problem" is included.
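Brooks's fractions do add up to a whole schedule, which is easy to check. Here is a minimal Python sketch of that split; the 12-week duration is just an illustrative assumption, not anything from the book:

```python
from fractions import Fraction

# Brooks's suggested split for a software schedule [p20]
phases = {
    "planning": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "component test": Fraction(1, 4),
    "system test": Fraction(1, 4),
}

assert sum(phases.values()) == 1  # the fractions cover the whole schedule

def schedule(total_weeks):
    """Allocate a project duration using Brooks's ratios."""
    return {phase: float(frac * total_weeks) for phase, frac in phases.items()}

print(schedule(12))
# {'planning': 4.0, 'coding': 2.0, 'component test': 3.0, 'system test': 3.0}
```

Seeing it laid out like this makes the point vivid: coding, the part everyone fixates on, is the smallest slice.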

Brooks' Law:

"Adding manpower to a late software project makes it later." [p25]

I’ve definitely seen that one up close and personal!

He likes well architected software projects and I pretty much agree.

"Simplicity and straightforwardness proceed from conceptual integrity. Every part must reflect the same philosophies and the same balancing of desiderata. Every part must even use the same techniques in syntax and analogous notions in semantics. Ease of use, then, dictates unity of design, conceptual integrity." [p44]

This is the reason why projects mostly conceived by one clever person are often very compelling. And this is why those clever people can often pull off projects that would otherwise seem too ambitious.

"In short, conceptual integrity of the product not only makes it easier to use, it also makes it easier to build and less subject to bugs." [p142]

Brooks points out that limitations can be good for software.

"Form is liberating." [p47]

Meaning that when constrained by some budget (money, CPU, memory, etc) people get creative. True!

Funny to see examples of things computer people used to obsess over. Things like sorting, which modern computer science classes still belabor, were once legitimately interesting. He proudly cites a feature of his project that properly deals with leap year dates, something that is now taken for granted down to very minute details.
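The leap year logic his team had to get right by hand is now a standard library one-liner (Python's `calendar.isleap`, for example), but the full Gregorian rule is still worth knowing:

```python
def is_leap(year):
    # Gregorian rule: every 4th year is a leap year, except century
    # years, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The century exceptions are the part projects used to get wrong:
# 1900 was not a leap year, 2000 was, 2100 will not be.
print([y for y in (1900, 1996, 2000, 2100) if is_leap(y)])  # [1996, 2000]
```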

In describing the typical cost considerations of a computing project in the 1960s, he sounds an awful lot like he’s describing "the cloud".

"On a model 165, memory rents for about $12 per kilobyte per month. If the program is available full-time, one pays $400 software rent and $1920 memory rent for using the program…" [p56]
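Working the arithmetic in that quote backward gives a feel for the scale (these are just the quote's own figures, rearranged):

```python
rate_per_kb = 12       # dollars per kilobyte per month, per the quote
memory_rent = 1920     # dollars per month
software_rent = 400    # dollars per month

program_size_kb = memory_rent / rate_per_kb   # the program's footprint
total_monthly = memory_rent + software_rent   # total rent for one program

print(program_size_kb, total_monthly)  # 160.0 2320
```

$2320 every month to keep a 160 KB program resident. "The cloud" indeed.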


"Since size is such a large part of the user cost of a programming system product, the builder must set size targets, control size, and devise size-reduction techniques, just as the hardware builder sets component-count targets, controls component count, and devises count-reduction techniques. Like any cost, size itself is not bad, but unnecessary size is." [p98]

The earlier remark about systems programmers depending on others is related to this too. Today people think nothing of importing billions of bytes of who-knows-what to create a small thing that I could write in hundreds of bytes. The reason is that they import some magic library which imports another magic library which imports… turtles all the way down. But that’s not the end of it. Each magic library in the infinite chain considers itself the comprehensive source for all things related to its functionality, functionality you may need only .01% of!

"No amount of space budgeting and control can make a program small. That requires invention and craftsmanship." [p100]

Which is very hard to find in the world of computing today. I actually tend to find better instincts with respect to this from electrical engineers than CS graduates.

The book encourages detailed written specifications. Unsurprisingly Brooks, who thought to write a book, is a decent writer.

"Only when one writes do the gaps appear and the inconsistencies protrude. The act of writing turns out to require hundreds of mini-decisions, and it is the existence of these that distinguishes clear, exact policies from fuzzy ones. …only the written plan is precise and communicable."

Talking about managerial vs. technical staff provisions:

"Offices have to be of equal size and appointment." [p119]

Hilarious, right? The professionals who created software in the 1960s and 1970s had actual offices. If you’re in a Silicon Valley tech bro mosh pit and you don’t feel like a chump about that, well, you’re less easily aggrieved than I.

"…the fixing of the documented component bugs will surely inject unknown bugs." [p148]

Debugging is a two steps forward, one step back sort of process, if you’re good at it. And it is not just debugging. I recently had a demo coming up which caused all progress to come to a grinding halt. I wanted to add more planned features and improvements, but I dared not introduce a showstopping bug that would spoil showing off what I had already attained.

He describes [p149] a "purple wire" technique for hardware debugging in ancient times. Modifications would be made with purple wires instead of the normal yellow ones. Once a modification proved itself, or enough time had elapsed, the purple wire would be replaced with yellow. That made it easy to see which wires had just been hastily fiddled with in hopes of fixing things. He tries to come up with a software version of that, but it’s a hard problem.

"Add one component at a time. This precept, too, is obvious, but optimism and laziness tempt us to violate it." [p149]

I have to say, I’m pretty good with this one. Mostly out of laziness combined with a deep pessimism that nothing will ever work right on the first try.

When I was a lad, flow charts were still something that many people accepted on faith as a good idea. It was pretty easy to see that great software did not often come with great flow charting, so I kept an open mind. It turns out that flow charts were already dead even way back then.

"The detailed blow-by-blow flow chart, however, is an obsolete nuisance, suitable only for initiating beginners into algorithmic thinking. When introduced by Goldstine and von Neumann the little boxes and their contents served as a high-level language, grouping the inscrutable machine-language statements into clusters of significance." "In fact, flow charting is more preached than practiced. I have never seen an experienced programmer who routinely made detailed flow charts before beginning to write programs. Where organizational standards require flow charts, these are almost invariably done after the fact."

I guess what he’s saying is that von Neumann (who was clearly amazing) created flow charts because he didn’t have a more developed high-level language. Flow charts probably helped inspire the design of modern computer languages, which have served to replace them. (Makes you wonder about UML, doesn’t it?)

That should give you a decent sense of what this book is like. If you’re interested in software engineering and how old ideas are often still good ideas, check it out.


For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2019