Here’s an interesting article about autonomous cars called Google’s video recognition AI is trivially trollable. Wait, you say. That doesn’t sound like it’s about cars, and in fact the article doesn’t mention cars, autonomous or otherwise.

That’s what I’m here for! To fill in that blank. The essential point of the article is summed up nicely in this quote.

Machine learning systems are typically designed and developed with the implicit assumption that they will be deployed in benign settings. However, many works have pointed out their vulnerability in adversarial environments.

I am in total agreement with this. It reminds me of the history of operating systems and networking where everyone was thinking, "Wow! This is so cool! I can share everything with all of my cool new friends." And those "friends" eventually turned into cryptocurrency pirates.

I am confident in my lack of faith in an exclusively machine learning solution for autonomous cars. The reason is that a problematic circumstance doesn’t need to arise organically through some random stroke of bad luck. Even the most remote and obscure failure case can be cultured deliberately with an adversarial AI or even plain fuzzing, as sketched below. And this is exactly the kind of thing security researchers love to pick at.
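To make that concrete, here is a minimal sketch of how an adversarial input can be cultured from a benign one. It assumes PyTorch and a hypothetical stand-in classifier, and it shows the general gradient-sign idea rather than the method from any particular paper or system.

```python
# Minimal gradient-sign sketch: nudge an input in the direction that
# increases the classifier's loss. The model and image here are random
# stand-ins, purely for illustration.
import torch
import torch.nn as nn

# Hypothetical tiny classifier standing in for a real perception model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # a benign input image
y = torch.tensor([3])                              # its correct label

# Compute the loss gradient with respect to the input, not the weights.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Perturb every pixel a tiny amount along the sign of that gradient.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Against a trained model, those few lines of arithmetic typically produce a perturbation too small to notice by eye yet large enough to flip the prediction, and nobody has to wait around for the unlucky input to show up on its own.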

I’m currently trying to use RNNs to analyze protein peptide sequences; adversarial meddling isn’t much of a concern there. But when deliberate human actions feed the data sets, as they do with uploaded videos or driving on public roads, assuming the worst is not tinfoil hat nuttery. Even a tiny amount of computer security experience makes that clear enough.
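For contrast, here is roughly the shape of that kind of analysis. This is a minimal sketch assuming PyTorch, not my actual pipeline; the encoding, model size, and sequence are all made up. The point is simply that every input comes from a curated database rather than from a hostile stranger.

```python
# Sketch of running a peptide sequence through a small RNN (LSTM).
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"           # the 20 standard residues
INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq):
    """Encode a peptide string as a (length, 20) one-hot tensor."""
    t = torch.zeros(len(seq), len(AMINO_ACIDS))
    for pos, aa in enumerate(seq):
        t[pos, INDEX[aa]] = 1.0
    return t

# Tiny LSTM that reads the sequence and emits one score per peptide.
rnn = nn.LSTM(input_size=20, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

peptide = "MKTAYIAKQR"                          # made-up example sequence
x = one_hot(peptide).unsqueeze(0)               # shape: (1, length, 20)
output, (h_n, c_n) = rnn(x)
score = head(h_n[-1])                           # score from final hidden state
print(score.item())
```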

UPDATE: If you’re interested in the technical details of the security ramifications of machine learning, here is an excellent talk on exactly that subject.

UPDATE 2017-08-24: Here’s a nice paper showing how minor changes to a stop sign can make a typical classifier think it’s a 45 mph speed limit sign.