Here’s an interesting article about autonomous cars called “Google’s video recognition AI is trivially trollable.” Wait, you say, that doesn’t sound like it’s about cars, and the article doesn’t seem to mention cars, autonomous or otherwise.
That’s what I’m here for! To fill in that blank. The essential point of the article is summed up nicely in this quote:
Machine learning systems are typically designed and developed with the implicit assumption that they will be deployed in benign settings. However, many works have pointed out their vulnerability in adversarial environments.
I am in total agreement with this. It reminds me of the history of operating systems and networking, where everyone was thinking, "Wow! This is so cool! I can share everything with all of my cool new friends." And those "friends" eventually turned into cryptocurrency pirates.
I am confident in my lack of faith in an exclusive machine learning solution for autonomous cars. The reason is that a problematic circumstance doesn’t have to arise organically through random bad luck; even the most remote and obscure failure case can be deliberately cultured with an adversarial AI or even plain fuzzing. And this is exactly the kind of thing security researchers love to pick at.
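To make "cultured" concrete, here is a minimal sketch of one standard recipe for manufacturing a bad input, the fast gradient sign method (FGSM). Everything in it is a stand-in I made up for illustration (a tiny untrained PyTorch model and a random input vector), not anything from the article; the point is only how mechanical the attack is: take the gradient of the loss with respect to the input and nudge the input in the direction that makes the model most wrong.

```python
# Toy FGSM sketch. The model and input are placeholders, not a real
# perception system; this just shows the shape of the attack.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: 3 classes over a 32-dimensional input.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 32)          # an ordinary, benign input
true_label = torch.tensor([0])  # the class it is supposed to be

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# One small step in the direction that most increases the loss.
epsilon = 0.05
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Against a real trained model, perturbations of this flavor can be small enough that a human wouldn’t notice anything wrong while the classifier’s answer flips; the attacker doesn’t need to wait for an unlucky coincidence, they can compute one.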
I’m currently trying to use RNNs to analyze protein peptide sequences; adversarial meddling isn’t much of a problem there. But if deliberate human actions feed into the data sets, as they do with uploading videos or driving on public roads, then assuming the worst is not tinfoil-hat nuttery. Even a tiny amount of computer security experience makes that clear enough.
UPDATE
If you’re interested in the technical details of the security ramifications of machine learning, here is an excellent talk on exactly that subject.
UPDATE
Here is a superb article covering recent research into adversarial attacks against neural networks.
UPDATE 2017-08-24
Here’s a nice paper showing how minor changes to a stop sign can make a typical classifier think it’s a 45 mph speed limit sign.
UPDATE 2020-03-20
A Twitterer posted this:
Astonishingly, no known machine learning system can reliably tell a bird from a bicycle when there’s an adversary involved. My colleagues and I have proposed a contest to see if we can change this.
About this project: https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html
I like how a bicycle (and a bird) is their primary difficult-to-secure image of interest. Well, obviously I don’t like this, but it’s fun to see, since I’ve often claimed that an all-machine-learning autonomous driving system would fail when it confronted me and my adversarial bicycle.