Chris X Edwards

Very British-American Problems: In one of the few US roundabouts an idiot driver who doesn't know which exit to take just stops!
2020-01-21 13:48
A winter day of temperate rain in the North - or as I like to call it, Free Car Wash Day!
2020-01-10 13:52
"Unlock The Secret To Avoiding Self-Help Books". I think that would be my self-help book's title. It has probably already been written.
2020-01-02 10:51
Articles delivered as a series of Tweets are like a doctor smoking. I just can't get past that and take it seriously.
2019-12-04 07:13
To say Unix pioneers' passwords were bad would be like saying a 1950s lock box for mountaineers at the top of Everest has a weak lock.
2019-11-15 10:02
Blah Blah

GTAV - On Linux

2019-12-31 19:40

It’s been a very tough December for me. After a lifetime of avoiding serious illness, I’ve spent the past four weeks fighting either pneumonia or something functionally identical. I had planned to write about some interesting and intelligent stuff, but with my brain half gone we’ll have to lower expectations.

The holidays are a time when people like me are desperate to avoid going to places other people go to. It’s a good time to get caught up on sitting in front of a computer for dozens of hours at a time. With half my brain out of action, it’s specifically a good time to get caught up with my video game agenda.

To quickly review, I’m a serious video game enthusiast who has traditionally not played video games. This is because I’ve also traditionally been even more of an enthusiast for stubbornly not using terrible operating systems. But times are changing!

In 1993 the Wine Project was started. Yes, pretty much as soon as Microsoft Windows came out there were people who despised it enough to want to put it out of business. First we need a quick technical digression.

Your operating system manages resources and blah blah blah. I’ve argued for decades now that normal people don’t know what their operating system does and they emphatically do not care. So why the fuss? Well, the real consequence of not using Big Brother’s ubiquitous OS is that, thanks to network effects, you will not be able to run any cool software.

Why is this? The cool software you run in Windows is compiled for the chip inside your computer. Why exactly do you specifically need Windows on hand to run the compiled code? The answer is that software which is expecting Windows makes system calls. An easy example is reading some data from a file. You have some cool software and it must get some data off the disk. In its executable, it makes a system call asking the program that manages system resources (the OS, remember?) to go find, verify, open, and read that data, hopefully without screwing it up.
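A toy sketch of the idea in Python (all names here are illustrative, not real Win32 or Wine internals): a program written against a Windows-style file-reading call can be satisfied by quietly translating that call into the equivalent POSIX system calls.

```python
import os
import tempfile

# Toy sketch of the Wine idea: a request made in Windows style is
# satisfied by translating it into the equivalent POSIX system calls.
# (Names are hypothetical; the real Win32 ReadFile is far richer.)

def windows_style_read_file(path, num_bytes):
    """Stand-in for a Win32-ish ReadFile, translated to POSIX calls."""
    fd = os.open(path, os.O_RDONLY)    # POSIX open(2)
    try:
        return os.read(fd, num_bytes)  # POSIX read(2)
    finally:
        os.close(fd)                   # POSIX close(2)

# Demo: the "Windows" call works fine with no Windows in sight.
demo = os.path.join(tempfile.gettempdir(), "wine_demo.txt")
with open(demo, "w") as f:
    f.write("hello from POSIX")
print(windows_style_read_file(demo, 5))  # b'hello'
```

This is the whole Wine trick in miniature: the calling program never knows (or cares) who actually serviced the request.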

The Wine Project was a stupendously ambitious initiative to enumerate all of the things Windows is ever asked to do, and to instead handle those requests using the Linux way of doing things. When I first encountered the project 20 years ago, basically nothing worked. They were targeting things like Windows' Solitaire with unstable results. But these guys are stubborn, not stupid. Twenty years of dogged grinding has actually worked! Today, there are very few things that a program designed for Windows can ask of the operating system that Linux and Wine can’t intercept and handle natively, often with better performance.

Years ago, Wine could already handle pretty much any typical banal enterprise software. In the last few years, Wine has aimed even higher, striving to support the most complex and sophisticated software in existence: games.

A couple of years ago, I wrote about the strange but welcome push by Valve to aggressively promote games that ran natively on Linux. Despite a vanishingly small percentage of players who actually use Linux (like me), they were very keen on expanding Linux options. This surely ties in to their SteamOS (which is Linux) agenda and perhaps some other strategic motives they require to keep predators away.

That’s the history. Valve has helped induce a huge number of native Linux games and that has been nice. But it turns out that a lot of blockbuster mega hit games were still Windows only. The new development in this space is that today there is some very aggressive activity to use Wine to tame even those very challenging popular games and bring them to Linux. Maybe it was just too hard before now, but I am amazed to finally see a correct emphasis on the games most people play most of the time. What is more amazing is that these efforts are now very often successful! (In fact, I would say that at this time the vast majority of Windows gaming that can not be done on Linux is blocked by anti-cheat measures for on-line play that explicitly look for Windows per se. I believe BattlEye is the prime example.)

Valve has created a system called Proton which they hope will make it easier for Wine to bridge the gap from Linux to popular Windows-only games. There is a Proton Database which keeps a current account of what is working and what still has problems.


There are also impressive projects like Lutris which aspire to give Linux users the ability to play Windows (and Linux) games from a centralized place that isn’t Steam. Imagine having control over the software you own — some of us compulsively do that!


While the gaming world barely notices, it is truly a golden age for Linux gaming. I have always maintained that the only reason Windows has been used at all since about 1995 is games (IT staffs over-represent gamers — Windows in the IT department naturally follows). Separate games from Windows and I believe it will have no reason to exist and can finally die. This is why, in my opinion, this topic is important even if you have no interest in games at all.

And with that explained, we can now look at how I was able to finally explore what I consider to be the greatest cultural artifact of the 201Xs: Grand Theft Auto V. This is a game that ostensibly does not run on Linux. However, now it does!

Released (for Windows) in the middle of the decade, this game made a huge fuss and a shit ton of money ($6e9). The fuss was because the 115 million units sold caused a lot of bystanders to notice for the first time that video games can be wildly anti-social and GTAV was actively pushing the boundaries of how disgusting a piece of art could possibly be.

But a piece of art it is. The lead writer, Dan Houser, is a British guy who is able to show Southern California as it truly is. As a British-Californian who lived in San Diego for decades, I watched the dark truth slowly creep into focus (I am so happy to have left). Yes, this game involves psychopathic criminal atrocities. I can’t really get into that. But the far more brutal aspect of the game is the vicious satire. I fell in love with this game instantly, its true message well known to me.

I’m usually a pretty well-adjusted and happy person. I sometimes feel a bit of psychological tension about how much I love vicious brutal satire. For example, I loved David Rees' Get Your War On, a comic that was so intense that I think it drove the author a bit insane. My favorite thing on the internet is n-gate. I don’t watch much TV but whatever my second favorite show of all time is, it’s way, way down below the inky blackness of Black Mirror. And this is how GTAV is, darkly excoriating the stupidity of moving to California.

I haven’t really done any of the missions (it all looks well organized from a game play standpoint if you like committing horrendous crimes). I’m just too captivated by roaming the massive simulacrum of Southern California. I feel I recognize it at every turn.


Here is a shot of me playing with a speedboat that my character recently grand thefted. It turns out that playing with speedboat simulations is more than a hobby with me! Despite needing proper nav lights and maybe the steering wheel on the side it normally is on in boats, this simulated speedboat was pretty damn good. The diversity of vehicles is impressive. Their fictional brand of Jetski is "Speedophile" which is what I will not be able to resist calling such craft in the future.

But there’s more. That mountain — it is perfect! Is it Tecate Peak? Otay Mountain? I’ve climbed so many that look exactly like that. And how is there boating? (There is no inland boating in real California.) This is in some kind of inland lake which would be comically incorrect except for the fact that in Imperial County there is the disgusting Salton Sea. Although the GTAV town of Sandy Shores is believed to be modeled after the Salton Sea’s "town" of Desert Shores (where I once camped behind the fire station), I would say that it is actually the spitting image of Bombay Beach on the downwind side of the "Sea" (yes the water stinks horribly in real life).


In real life I like to learn about an area by finding the worst places in it. I’ve done that in Southern California, and GTAV is a giant catalog of them. I enjoy wandering in the game looking at the idiotic talentless graffiti, the details of the garbage everywhere, the homeless under bridges, and dumpsters — dumpsters everywhere! The ads and billboards are brutal — one had an homage to California’s non-recourse mortgage law: "Stop paying your mortgage! The banks failed you! Time to fail them. Learn how." Pure bubble California.

The advertisements in general were sublime. I spent literal hours watching all of the extensive TV programming, complete with absurd inane ads that are spooky close to real. Same for the hours of brilliant radio programming. The talk shows — OMG. There is an entire cell phone system so that your character — like the entirety of the population — can be staring at a phone non-stop. I skipped that. I also skipped the crazy detailed fictional internet which I’m told allows one to buy stocks and other fancy things. The main feature of the internet is Lifeinvader, a Facebook placeholder. The satire comes fast and furious and there is no real way to document it. But my mind was blown.

One I did jot down was an ad for real estate deep into the desert, a classic Southern Californian swindle. The copy was, "Enjoy life where none exists!" (In the Grand Senora Desert.) This echoes what I’ve said for years — San Diego is literally in the Sonoran Desert, and if you are not a mega-millionaire, you can not afford to live close enough to the ocean to ameliorate the very harsh conditions of living where no humans should live.

But it’s not all bad, is it? What I enjoyed and miss about California is cycling in the majestic mountains. And here GTAV did not let me down! I spent tons of time riding this bike around enjoying fond memories of the real thing.


As an enthusiast of traffic simulations I spent quite a lot of time studying the AI traffic — it’s not perfect, but given the scope, it’s pretty impressive. In 2016 I wrote a post that linked to this funny GTA video to demonstrate the difficulty of autonomous driving. To be fair the AI drivers usually do better than that!

There is one aspect of criminality and disrespect for the law that came easy to me — ignoring traffic laws. Once I got that bicycle it felt completely normal to only look at lights as hints to what the clumsy AI drivers were going to do. Survive is the law in GTAV just as it is my only law on the bike. And now that I’ve been dragged into the wraith world by some kind of very serious pathogen, survive is also the only priority in my indoor life right now. I think I’m recovering (let’s just say it was worse). Hopefully you found this last post of the decade worth reading even if my brain wasn’t fully engaged.

I think the 20s are going to be good. I think they could be anyway. We have reason for optimism. Even in California — at least they’ve got a couple less people to support!

Answers To Mysteries Of The Uber AV Fatality - Part 2

2019-11-17 06:26

Yesterday I posted Part 1 of my analysis of the NTSB's final report on the March 18, 2018 fatal collision of an Uber Advanced Technology Group (ATG) research autonomous vehicle into a pedestrian.

You may have noticed something unusual about Part 1 — I did not explore any technical issues related to autonomous driving. In my main post on the topic from last year and in Brad’s recent analysis there is some obvious attention to the technology.

I felt it was important to properly separate the two important relevant issues.

  1. Are idiot drivers terrible or the worst problem ever?

  2. Why exactly are autonomous vehicles not yet immaculate?

As I showed yesterday, the root cause of the fatal crash was an idiot human driver who happened to work for an autonomous vehicle research team. If an engineering team is designing a new kind of blender, we can’t say much about the safety of the blender if an intern mixes too many margaritas, gets drunk, throws the blender out a 10th story window, and kills a passerby with it. Of course if such an incident did occur, an official investigation might reveal all sorts of information of great interest to professional blender designers. That is what I am doing now. In some ways I feel bad for Uber because, as with the victim’s favorite recreational drugs, their internal operations are really none of our business. However, they do operate dangerous machinery in public and a bit of extra insight in this case seems like a fair trade.

The only thing I mentioned about autonomous vehicle technology in part 1 was this extremely provocative suggestion made by the Vehicle Operator.

Section 6f says, "When asked if the vehicle usually identifies and responds to pedestrians, the VO stated that usually, the vehicle was overly sensitive to pedestrians. Sometimes the vehicle would swerve towards a bicycle, but it would react in some way." Well, that seems on topic! At least to me! I believe I was the first uninvolved observer to notice that it seemed like the car steered into the victim. This has now been conclusively confirmed.

Let’s take a look at why I believe that the autonomous car did in fact steer into the victim.

The Vehicle Automation Report was fascinating to read. Autonomous car nerds should note that the radar made the first detection of the victim. It also got her velocity vector estimated immediately as a bonus. Of course it didn’t quite correctly guess that it was a person walking with a bicycle, but it did come up with a safe practical guess of "vehicle". Lidar comes in 0.4 s later with no idea about the object’s nature, speed, or trajectory.

Here is a very important bit in 1.6.1. It’s a bit hard to parse, but read it carefully because — when combined with a grossly negligent safety driver — this is lethal.

However, if the perception system changes the classification of a detected object, the tracking history for that object is no longer considered when generating new trajectories. For such newly reclassified object [sic], the predicted path is dependent on its classification, the object’s goal; for example, a detected object in a travel lane that is newly classified as a bicycle is assigned a goal of moving in the direction of traffic within that lane. However, certain object classifications — other — are not assigned goals. For such objects, their currently detected location is viewed as a static location; unless that location is directly on the path of the automated vehicle, that object is not considered as a possible obstacle. Additionally, pedestrians outside a vicinity of a crosswalk are also not assigned an explicit goal. However, they may be predicted a trajectory based on observed velocities, when continually detected as a pedestrian.

If I’m reading that right, the system clears any historical path data about an object when it is reclassified. This seems crazy to me.

Let’s think this through with an Arizona style example. Let’s say that the perception system detects an object and classifies it as a tumbleweed.


Here the white arrow is the trajectory the system measures and the magenta arrow is the trajectory the system supposes will consequently follow. Because the object appears to be on a path to be out of the way when the system gets to where it is now, it can comfortably just keep heading straight ahead on the path shown in green.

But then it gets a new closer perspective on the object and it now changes its belief to thinking it is a cat.


If I understand this right, the trajectory of the tumbleweed is now discarded (!) and the cat object gets to start over with no trajectory data, indicated in white. Now for a future prediction about where the object is headed (magenta) it only has some preconceived idea of what cats do. And we all know that cats are lazy and mostly lounge around. (At least that’s what the internet is telling me.)

So even if the car could have neatly avoided the tumbleweed/cat by going straight (because it could see that the object was heading out of frame) once it changes its belief to the cat, it might try to swerve (green path) to avoid what it assumes is a lazy lasagne eating cat that it believes is unlikely to be sprinting across the road.

But what if it is a cat but not doing what cats normally do?


If the cat is moving like a tumbleweed — and importantly, not like how it believes cats move — you have a recipe for disaster. The system has thrown away valuable data about the object before it was known to be a cat (white). It believes cats are lazy and sedentary (magenta). It is planning to swerve to a new path to avoid the cat (green). If the cat or whatever it is, in fact, is moving fast as originally perceived (black) the misunderstanding presents an opportunity for a collision (red X). A collision that painfully looks to human reviewers like the car was going out of its way to swerve into the cat.

In the report’s specific example — the subject of the report and a topic I care a lot about — the class of interest is bicycles. If the perception system had a murky feeling that there was something out there but didn’t really know what (e.g. "other"), it would assume that it was stationary, perhaps like the giant crate/ladder/mattress type objects I’ve encountered randomly on the freeway. It may start to believe the object is moving after some time of actually tracking it. It may even have a decent sense of how to predict where things will wind up a few seconds into the future if nobody radically dekes. But if, as the perception system gets closer and re-evaluates things, it decides that it is a bicycle, all that motion data was discarded.

It is the inverse of my example with the same effect — the system decides that what it thought was a stationary object is really an active one, but in reality the stationary assessment was more correct. Once the object is given properties of a typical bike as the best approximation of what to expect, it is out of touch with the true situation. And that is how a pedestrian in an odd location consorting with (but not riding) a bicycle gets wildly misunderstood by a complex software planning system.
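The flaw, as I understand it, can be sketched in a few lines. This is a deliberately crude toy (all names and structure are hypothetical; the real planner is vastly more complex), but it shows how a reclassification that wipes history turns a clearly moving object back into an apparently static one:

```python
class TrackedObject:
    """Toy tracker illustrating the reported flaw: reclassifying an
    object discards its trajectory history (hypothetical sketch)."""

    def __init__(self, classification, position):
        self.classification = classification
        self.positions = [position]  # trajectory history

    def observe(self, position):
        self.positions.append(position)

    def reclassify(self, new_classification):
        if new_classification != self.classification:
            self.classification = new_classification
            # The lethal part: history is discarded on reclassification,
            # so prediction starts over from a single point.
            self.positions = self.positions[-1:]

    def predicted_velocity(self):
        if len(self.positions) < 2:
            return (0.0, 0.0)  # no history: effectively static
        (x0, y0), (x1, y1) = self.positions[-2], self.positions[-1]
        return (x1 - x0, y1 - y0)

obj = TrackedObject("other", (0.0, 0.0))
obj.observe((1.0, 1.0))          # clearly moving across the road
print(obj.predicted_velocity())  # (1.0, 1.0)
obj.reclassify("bicycle")        # belief changes; history erased!
print(obj.predicted_velocity())  # (0.0, 0.0): motion knowledge lost
```

Every reclassification resets the clock on motion estimation, which is exactly why repeated reclassification in the final seconds was so destructive.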

With that in mind it’s very chilling to read the event log (Table 1) which shows the 5 seconds before the crash, starting when the lidar first became aware of the hazard; in that time the system changed the classification approximately seven times (the word "several" was used at one point, which I’m counting as three), each time erasing trajectory information.

The report’s table 1 also confirms my analysis of the video that the car’s intended action was to cut inside the "cyclist" in preparation for a right turn. By erasing the victim’s trajectory at every muddled classification, the final guess used to make the best plan it could was that the victim was a cyclist and therefore likely to be going straight. This is exactly as I had predicted.

In my original post I called out this traffic engineering arrangement as the real crime here. Without trajectory data, assuming a cyclist in that location would be riding through the intersection like one of the strongest people in the neighborhood is entirely reasonable. Not having kept the contradictory trajectory data was an unfortunate mistake.

That explains the mistake that led to the fatal situation. But there was much more in the report. Incredibly it is also revealed that…

…the system design did not include a consideration for jaywalking pedestrians. Instead, the system had initially classified her as an other object which are not assigned goals [i.e. treated as stationary].

Damn. It’s easy to see why the classification system had trouble. Had they never seen jaywalking before? A very strange omission.

So that’s bad. But there’s more badness. Braking.

"ATG ADS [Automated Driving System], as a developmental system is designed with limited automated capabilities in emergency situations — defined as those requiring braking greater than 7 m/s² or a rate of deceleration (jerk) greater than +/-5 m/s³ to prevent a collision. The primary countermeasure in such situations is the vehicle operator who is expected to intervene and take control of the vehicle if the circumstances are truly collision-imminent, rather than due to system error/misjudgement."

I find this very strange. We had heard scary reports that Uber had turned off the Volvo’s native safety braking features. I figured those would be sensibly replaced by better ones. (Maybe they still are but…!) To limit braking effectiveness is madness. Yes, I understand that you need to not cause a pile up behind you, but if your car needs to stop in an emergency, and it can stop, it should stop. To solve the problem of the mess behind, have rearward tailgating detection watching for that. If the rear is clear, there is no excuse not to stop at full race car driver deceleration when it is needed. I would think that letting the limit be whatever the ABS can handle would be best for stopping ASAP. To put ATG’s limit of 7 m/s² in perspective, this paper makes me think that 7.5 is normal hard braking and ABS does about 8, with 9 possible.
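To get a feel for what those deceleration figures mean, consider the basic stopping distance d = v^2 / (2a), ignoring reaction time. The 20 m/s (about 45 mph) speed here is just a round illustrative number, not a figure from the report:

```python
# Stopping distance d = v^2 / (2a), ignoring reaction time.
# 20 m/s (~45 mph) is an illustrative speed, not from the NTSB report.
v = 20.0  # m/s

for a in (7.0, 8.0, 9.0):  # ATG's cap, typical ABS, aggressive ABS (m/s^2)
    d = v**2 / (2 * a)
    print(f"a = {a} m/s^2 -> stopping distance = {d:.1f} m")
# a = 7.0 m/s^2 -> stopping distance = 28.6 m
# a = 8.0 m/s^2 -> stopping distance = 25.0 m
# a = 9.0 m/s^2 -> stopping distance = 22.2 m
```

Several extra meters of stopping distance, conceded by policy, is a lot when a pedestrian is in the lane.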

To be fair you can see the problem here. The report says that out of 37 crashes in 18 months by Uber ATG vehicles in autonomous mode, 25 were rear end crashes (another 8 were other cars side swiping the AV). Clearly if you’re out with idiot drivers, you need to be watching your backside.

(Let’s round this out because in my original post I wondered if there is "a ton of non-fatal mishaps and mayhem caused by these research cars that goes unreported? Or was this a super unlucky first strike?" The report goes a long way in answering this question. Looks like Uber was very unlucky. The details are interesting: In one striking incident (pun intended) the safety driver had to take control, and swerve into a parked car to avoid an idiot human driver that had crossed over from the oncoming lane. In two more incidents the cars suffered from problems because a pedestrian had vandalized the sensors while it was stopped. WTF? And finally in one single incident that could be blamed on the software, the car struck a bent bicycle lane bollard that partially occupied its lane — we can presume an idiot driver had been there first to bend it.)

But this topic gets weirder.

  • if the collision can be avoided with the maximum allowed braking and jerk, the system executes its plan and engages braking up to the maximum limit,

  • if the collision cannot be avoided with the application of the maximum allowed braking, the system is designed to provide an auditory warning to the vehicle operator while simultaneously initiating gradual vehicle slowdown. In such circumstance, ADS would not apply the maximum braking to only mitigate the collision.

Wow. That is a very interesting all or nothing strategy. "Oh, hey, I couldn’t quite understand that was a bicyclist and it now looks like only computer actuated superhuman braking can avoid a collision — turning control back to you human. Good luck."
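The strategy the report describes might be sketched like this (a hypothetical simplification; function and constant names are mine, not ATG's):

```python
MAX_BRAKING = 7.0  # m/s^2, the cap stated in the report

def plan_braking(required_decel):
    """Toy version of the reported all-or-nothing strategy."""
    if required_decel <= MAX_BRAKING:
        # Collision avoidable within the cap: brake as needed.
        return ("brake", required_decel)
    # Collision NOT avoidable within the cap: do not brake hard
    # merely to mitigate; warn the human and slow down gradually.
    return ("warn_operator_and_coast", None)

print(plan_braking(5.0))  # ('brake', 5.0)
print(plan_braking(9.5))  # ('warn_operator_and_coast', None)
```

Note the perverse outcome: the more desperately braking is needed, the less braking the system applies.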

Note that this has nothing to do with the false positive problem. I’m assuming only situations where the system is more certain of needing emergency braking than a human would be. To say you want to back off potential braking effectiveness because of false positives is addressing the wrong problem.

The report notes one reason the Volvo ADAS was disabled was to prevent the "high likelihood of misinterpretation of signals between Volvo and ATG radars due to the use of same frequencies." Really? That’s a problem? Do they know what else uses the same frequency as a Volvo’s ADAS radar? Another Volvo! I’m a bit worried if this is a real problem. And the Volvo’s ADAS brake module was disabled because it "had not been designed to assign priority if it were to receive braking commands from both [the Volvo and Uber systems simultaneously]." Again, that’s weird. This calls for an OR, right? If either braking system is, for whatever reason, triggered, shouldn’t there be action?
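The arbitration the report says was never designed seems like it could be a one-liner. This is a sketch of the OR I have in mind, not Volvo's or ATG's actual interface:

```python
def arbitrate_braking(volvo_cmd, atg_cmd):
    """If either system requests braking (in m/s^2), brake;
    if both do, honor the stronger request."""
    return max(volvo_cmd, atg_cmd)

print(arbitrate_braking(0.0, 4.0))  # 4.0: ATG alone requests braking
print(arbitrate_braking(6.0, 4.0))  # 6.0: both request; stronger wins
```

Real brake-by-wire arbitration surely has more subtlety, but "take the maximum of all braking requests" is the obvious safe default.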

At least this tragic incident pushed Uber ATG to really fix up their safety posture.

Since the crash, ATG has made changes to the way that the system responds in emergency situations, including the activation of automatic braking for crash mitigation.

Also several stock Volvo ADAS systems now remain active while in automated driving mode. They even changed the radar frequencies to avoid any chance of confusion there. They also upped the braking g force limits. All sensible changes.

I consider the main lethal bug to be the resetting of the trajectory information upon reclassification. Apparently, they duly fixed that too.

Overall I never got the impression that Uber ATG was careless or particularly unsafe. I felt like all of their procedures were pretty reasonable given the fact that they are forging ahead into the unknown of our transportation future. There will be bugs and there will be mistakes. Not properly understanding the full depravity of using a phone in a car may be a special Uber blind spot due to the nature of their primary service. But it really did seem like they tried their best to do things safely and with a good backup plan. These technical failures are merely interesting and should not be taken out of context. Maybe the Uber engineers were testing how some dynamic reacts to a known bad system and they normally use a better one (don’t laugh, I’ve done it personally). But if they were doing something like that or even if their system just had bugs that needed to be found and fixed, their engineering team was counting on their safety drivers for safety. And as with the overwhelming majority of automotive tragedies, it was the human in that plan that let everyone down.

Answers To Mysteries Of The Uber AV Fatality - Part 1

2019-11-16 14:38

On November 5, 2019 the National Transportation Safety Board released its final report on the March 18, 2018 fatal collision of an Uber Advanced Technology Group (ATG) research autonomous vehicle and a pedestrian. The report can be found here. I have written extensively about this important incident in a post with critical updates last year.

With the new information it has been interesting to revisit this important historical event. Let’s review the questions we then wanted to have answers for that we now do. Quoting myself from my 2018-03-22 update…

The biggest difference between humans and AI however is on fine display with the safety driver recording a perfect advertisement for fully autonomous vehicles ASAP. I timed that the safety driver was looking at the road and paying attention to the driving for about 4s in [the limited driver facing] video; that leaves 9s where the person is looking at something in the car. If it’s not a phone and it’s some legitimate part of the mission, that’s pretty bad. If it is a phone, well, folks, that’s exactly what I’m seeing on the roads every day in cars that haven’t been set up to even steer themselves in a straight line.

In my update (2018-06-23) responding to reports that the driver was watching videos I said this.

It does seem that either Uber is extremely negligent for having their driver look at the telemetry console instead of doing the safety driving OR the driver is extremely negligent for watching TV on the job. Yes Uber could have done better [with the car’s technology], but that’s what they’re presumably researching; dwelling on that is beside the point. What we need to find out is if Uber had bad safety driver policies or a bad safety driver. If the latter, throw her under the bus ASAP.

As this is the critical question with literally millions of lives (to be saved by autonomous vehicles) at stake, I was hopeful that the NTSB report would conclusively provide an answer. It did not. It did however present a lot of very strong evidence.

You and I are effectively jurors deciding this case, the outcome of which will guide our thinking about autonomous vehicle development. The defendant is Uber ATG’s autonomous vehicle research. Were the management and engineering guilty of wrongdoing which caused this fatal outcome? From the news publicity, it doesn’t look good for Uber. Let’s review the case more carefully.

(Travis Kalanick is very happy that) the report says, "ATG’s decision to go from having two VOs in the vehicle to just one corresponded with the change in Uber CEOs." The acronym "VO", for "vehicle operator", pops up a lot and is central to the case. The facts are that ATG previously used two safety drivers (VOs) per car and reduced that to one prior to and including the fatal incident. This is bad. With two safety drivers I am confident that this crash would not have happened. But it is also understandable. Why not three VOs per car? At some point there are diminishing returns and if they think the car is near ready to go with zero humans, using one could seem adequate. It’s not like Tesla’s autopilot requires a passenger to be present.

Next on the list of very damning evidence is the engineered distraction for the VO. The Vehicle Automation Report says, "…ATG equipped the vehicle with … a tablet … that affords interaction between the vehicle operator and the [Automated Driving System]." Basically ATG replaced the Volvo’s infotainment touchscreen system with something similar but specific to the mission. Requiring a VO to attend to a screen while a 4800lb death machine runs amok in public is categorically a bad idea. We know this because Waymo uses a voice system to accomplish the same goals. However, almost all of the evidence led me to conclude that the system was actually no more distracting than the stock Volvo system. Note that I’m not absolving that! But it clearly wasn’t radically worse than what the public accepts in general.

I said "almost" all of the evidence suggested that the onboard computers were compatible with a safe outcome. Some of the most interesting evidence is from the investigation interview with the VO, who obviously has a lot to gain by establishing that VOs were put in an untenable situation by design.

Indeed, according to the interview, the VO felt like the incident issue tracking system itself had "issues" and "…believed it was because the linux [sic] system did not work well with the Apple device."

Let’s look at a passage from the interview report that focuses on this engineered distraction.

6g. She stated that prior to the crash, multiple errors had popped up and she had been looking at the error list — getting a running diagnostic. 6h. She stated that when a CTC alert error occurs, she must tag and label the event. If the ipad doesn’t go out of autonomy then she has to tag and label the event. 6i. Her latest training indicated that she may look at the ipad for 5 seconds and spend 3 seconds tagging and labeling (she wasn’t certain this was stated in written materials from ATG).

She wasn’t certain it was officially documented because that is madness! Anyway, she’s painting a picture of a distracting technical interface associated with the project. That would definitely be bad.

It gets worse. Section 6f says, "When asked if the vehicle usually identifies and responds to pedestrians, the VO stated that usually, the vehicle was overly sensitive to pedestrians. Sometimes the vehicle would swerve towards a bicycle, but it would react in some way." Well, that seems on topic! At least to me! I believe I was the first uninvolved observer to notice that it seemed like the car steered into the victim. This has now been conclusively confirmed. It certainly doesn’t seem good for Uber.

The interview report makes no mention of, and raises no lingering questions about, the VO doing distracted stuff on her phone. It in fact accepts for the record in 6k that "…she had placed her personal phone in her purse behind her prior to driving the vehicle. Her ATG phone (company provided phone) was on the front passenger seat next to the [official mission related] laptop." Phone records showed that there were no calls or texts on the VO’s personal phone while on duty.

The report concluded that the VO does not drink alcohol and police found no reason to suspect intoxication of any kind. (The NTSB’s preliminary report couldn’t help but point out that, "Toxicology test results for the pedestrian were positive for [stuff which is none of our damn business]." This is how it should be. Should this impaired person have been driving? No. But she paid for that good judgement with her life.)

Uber ATG seems to have set up a bad system, one liable to impair a sober driver’s ability to respond to problems. The fact that I could not find in the report a vigorous defense of these systems, nor any of the logged event data from the operator’s interface, was not encouraging. At this point in the story, it’s looking bad for Uber.

However! Whodunit mystery stories can change direction quickly. Often the guilty party is actively trying to cast suspicion on a scapegoat.

I said that I did not find a vigorous defense by ATG, but I did find this in the report, "For the remainder of the crash trip, the operator did not interact with the [official ATG vehicle management] tablet, and the HMI [human machine interface] did not present any information that required the operator’s input." I’m presuming that the NTSB were provided with logs and data to support this pretty definite conclusion. If this is true, it certainly makes the driver video, where she is looking at something, much more suspicious. It does leave open the possibility that the system was producing optional messages that, while not "requiring the operator’s input", were distracting.

The Washington Post reported back in June 2018 that local police say the VO was watching "The Voice" on her phone at the time of the incident. I commented on that report at the time, but it felt unofficial. I was hoping that this NTSB report would have better details to make this evidence incontrovertible.

In the Human Performance Group Chairman’s Factual Report section 1.3, "According to the police report, … she picked up a gray bag and removed a cell phone with a black case. She exited the facility and parked in the adjoining lot, where she then focused most of her attention on the center console of the SDV, where an Uber ATG-installed tablet computer was mounted. At 9:17 p.m., the VO appeared to reach toward an item in the lower center console area, near her right knee, out of site [sic] of the dashcam." They then go on to show photos of where they surmise an entertainment telephone may have been positioned.

Frustratingly, the tone of the NTSB report lacks certainty about where the VO’s attention was during the mission that ended in the crash. However, they do not contradict the June reports of video watching. In section 1.6, "Search warrants were prepared and sent to the three service providers. Data obtained from Hulu showed that the account had been continuously streaming video from 9:16 p.m. until 9:59 p.m. (the crash occurred at approximately 9:58 [9:58:46.5])."

And that’s where the facts mostly end.

The vehicle operator was distracted and an undistracted operator would have prevented the fatality.

  • Uber ATG says their systems, which presumably are accurately logged, were not doing anything requiring distraction.

  • The VO says her personal distraction machine was safely in her purse, presumably butt-dialing Hulu to play The Voice.

I was unsatisfied being left with two contradictory explanations, however remote the plausibility of either might seem to you. There is a lot at stake here. In the end, we have to make a decision about the most likely version of the truth.

There is one more source of evidence that we can look at. The NTSB helpfully provided a data file of human-annotated labels of where the VO’s attention was during each half second for the 40 minutes prior to the crash. The plot of the VO’s attention in the report was muddled and unhelpful. I wanted to have a proper look at it, so I figured out how I could neatly visualize the state of each data point — no summarizing statistics, just the entire raw data as provided. As with many of life’s problems, the answer was ASCII art. In this plot an "_" means the VO was looking down at something in the car and a "+" means that they were looking up at the road properly. The number is the minute after 21h00 on 2018-03-18 and each of the 120 characters per row represents one half second of that minute.
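For the curious, the rendering itself is trivial. Here is a minimal sketch of the idea in Python; the "down"/"up" label values and the in-memory layout are my own invention for illustration, not the actual format of the NTSB’s data file:

```python
# Render per-half-second attention labels as rows of ASCII art:
# one row per minute, 120 characters per row.
# "_" = VO looking down at something in the car, "+" = looking at the road.
# NOTE: the 'down'/'up' label strings are hypothetical, not the NTSB
# file's actual encoding.

def render_attention(labels, start_minute=0):
    """labels: one 'down' or 'up' string per half-second sample."""
    rows = []
    for i in range(0, len(labels), 120):     # 120 half seconds per minute
        minute = start_minute + i // 120
        chars = "".join("_" if s == "down" else "+" for s in labels[i:i + 120])
        rows.append(f"{minute:02d} {chars}")
    return "\n".join(rows)

# Fabricated sample: one minute mostly down, then one minute mostly up.
sample = ["down"] * 100 + ["up"] * 20 + ["up"] * 110 + ["down"] * 10
print(render_attention(sample, start_minute=18))
```

The point of doing it this way is that nothing is summarized away: every half-second sample gets exactly one character, so the full raw record fits on a screen.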


We can see from these investigator annotations that the VO is horrendously distracted! The question we must all answer for ourselves is this: Does the character of this distraction suggest interaction with a technical system designed to log unusual mission events, or watching TV?

To make sure we integrate all knowledge possible, I have included highlights at the points where the Onboard Image & Data Recorder Group Chairman’s Factual Report makes additional specific text annotations about the video of the operator. Those annotations are the following.


"VO drank from soda-like bottle."


"VO drank from soda-like bottle."


"VO smirked."


"VO turned head and smirked at another Uber Technologies vehicle while stopped at a stop sign."


"VO smirked."


"VO yawned."


"VO yawned."


"VO laughed."


"VO smirked."


"VO appeared to be singing."


"VO appeared to be singing."


"VO yawned."


"VO was singing in the manner of an outburst."


"VO nodded no and then yes."


"VO nodded no."


"VO yawned and then smirked."


"VO yawned."


"VO drank from a soda-like bottle."


"VO reacted abruptly while looking forward of the vehicle."


"Impact occurred."

Everyone needs to make their own judgement, but my conclusion is that the vehicle operator was watching TV. In the report Uber ATG documents many commendable safety improvements, but solving the problem of its own interface distracting the driver 49.4% of the time is not one of them! I believe that is because that problem is spurious. Uber’s claim that their logging system did not log any events or distract the driver prior to the crash seems more reliable than the implicit claim by the VO that the video streaming to her phone was not being watched and smirked at. (Interestingly, I used the exact word "smirk" last year in my review of the limited final 13 seconds of pre-crash driver footage released to the public.)
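That 49.4% figure is presumably just the share of "down" samples in the annotation data. Using the same kind of label list as in the sketch above (fabricated here to mirror the reported split — the real number comes from the NTSB’s file), the arithmetic is a one-liner:

```python
# Fraction of half-second samples where the VO was looking down in the car.
# The list below is fabricated to mirror the reported 49.4% split;
# it is not the NTSB's actual data.
labels = ["down"] * 494 + ["up"] * 506
down_fraction = labels.count("down") / len(labels)
print(f"{down_fraction:.1%}")   # prints 49.4%
```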

I don’t need this woman to be burned at the stake or even go to jail. What I need her to do, and that’s why I’m doing this, is to exonerate autonomous vehicle research efforts. This all-too-typical idiot driver was unlucky enough to actually kill a vulnerable road user. That of course bothers me, but I’m more concerned about the thousands she may yet kill, statistically, by retarding progress toward a world devoid of idiot drivers like her. She is, as I mentioned previously, a perfect advertisement for exactly why autonomous vehicle research is so important.

UPDATE 2019-11-17

Part 1 explains the fatality. It was caused the same way that most automotive fatalities are caused — by an idiot human driver. For a look inside the specific autonomous technology problems the Uber engineers are working on, see my part 2 which goes into some more detail about the technical challenges.

UPDATE 2019-12-12

If you’re interested in this kind of thing, the next step is to check out Brad Templeton’s very thoughtful article on self-driving vehicle ethics. Those of us who have studied the Uber crash know the cause was a negligent human driver. What broader questions does society need to answer about the risks involved in ultimately lowering overall traffic risk? Brad, as usual, is very sensible on the topic.

Snow Boating

2019-11-08 15:01

Here I am doing the final check for the final regular autonomous boating mission of the year.


And here’s what that looked like from the other side.


It was actually a fantastic day out on the water. Superb scenery and data. Although I would like to keep going out, it turns out that stern drive engines are strangely well insulated from the relatively warm water they might be sitting in, meaning that leaving such a boat on the water at this time of year without complex shore powered heating is likely to crack the engine block. Not the first time my gear failed to match my cold weather enthusiasm.

I feel like this boating season has been quite a success and I am now looking forward to the off season when I can focus more on the system architecture and AI voodoo.

Coolest Boater

2019-11-04 04:40

Yesterday morning @dpbuffalo kindly tweeted this photo.


Yes, it’s cold, but I like that sort of thing. At least I can just sit there with my hands in my pockets while the boat completely drives itself the entire route.

Here is the photo I took very near that spot in the reverse direction.


Notice the excellent clouds, lighting, tree conditions, water textures, lens flare, etc. The reason I keep pushing to go out is to collect fantastic data like this.

Here is a dramatic scene while the boat drives out of the marina.


Less than four minutes later, the lighting and water texture are quite different.


(This may be a good time to consider the profound difference between stupid-looking hats and stupid hats.)

Also shown is my favorite camera, a fun little side project which I built out of trash left over from some GoPro packaging, plus a microscope slide, a Raspberry Pi, and Linux. Unlike a GoPro, it is rainproof while connected to power. You can imagine that being useful to me given the weather I go out in.

Here’s a sample from a couple of days ago in the rain.


And my trash camera floats! This makes it easier to retrieve valuable data if it ever goes overboard. A similar consideration is also wise when choosing hat color!


For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2020