Chris X Edwards

Of 2017's 112 days, expected 56 warmer than avg historical high. Got 68. Should be 56 cooler days than normal lows too. Only 15! (NOAA-KSAN)
2017-04-22 14:20
I saw a lot of ad hominem ridicule of Trump at the MarchForScienceSD this morning. So sad. We can do better. Should be ridiculing Pence too.
2017-04-22 13:08
Problems you didn't know you had: space fission. Space debris collides, explodes into more debris. Soon you have the Wall·E launch scene.
2017-04-21 14:32
Hard to say who's the bigger jackass, the guy whose vehicle is so loud it sets off a car alarm, or the owner of such a sensitive alarm. Tie?
2017-04-21 10:05
Ignoring any science news article containing the word "could" in its title _could_ be a practical time saver, scientist believes.
2017-04-20 10:02
Etc.
--------------------------

March For Science SD

2017-04-24 07:59

I just came across this photo of last Saturday’s March For Science in San Diego.

science.jpg

I wonder if anyone caught the significance of the huge banner on the Civic Theater advertising Verdi’s La Traviata. Spoiler alert: the main character dies from a lack of science! Specifically, a lack of a TB vaccine.

I’m sure someone noticed. Although San Diegans aren’t generally famous for their intellects, the ones who are were probably in this picture.

The Autonomous Tricycle

2017-04-22 15:09

Earlier this year I described how autonomous vehicles could be usefully deployed today with no further improvements in technology. And in another article I proposed a completely different way that autonomous vehicles could be usefully deployed with today’s technology. Not only could they be deployed with today’s technology, they could be deployed with 2006 technology. Not only could they be deployed, they should be deployed.

To press the point, I have yet another idea for how autonomous vehicles can and should be deployed today using technology that was available in 2006. I can summarize the strategy in 2 words: minimize energy. Let’s look at what that means and why it might be a superior solution to the ones typically being considered.

When I’m riding my bicycle along the beach boardwalk and mixed-use recreational paths near where I live, I am extremely aware of my responsibility and liability. If a drunk person jumps out and collides with me, knocking me off my bike, it’s easy to know whose fault it is. It is my fault. It is always the cyclist’s fault when interacting with pedestrians. (Just as it is always the car’s fault when interacting with cyclists and pedestrians - but that’s another topic to be argued at another time.)

Here’s a frightening situation I was in over 25 years ago that I still vividly remember now. I was cycling next to a line of cars mostly stuck in dense traffic. Just as they ordinarily pass me in their perceived space, I was now passing them in mine (legal for motorcycles to do in California). Suddenly the car I was next to and overtaking slammed on its brakes. As I rolled in front of it I realized what was going on. There was an adult crouched down on the far side of the road encouraging a small child on the near side to run across. Although I had good visibility, I didn’t see the adult because they weren’t standing and the car I was passing was in the way. I didn’t see the child because he was in between two parked cars and short. The adult was telling the child to go because they couldn’t see me (and they were being stupid). Well, that kid can thank the fact that on that day I had superhuman powers of bike control. As he sprinted right toward my back wheel from point-blank range, my attention focused only on the child and keeping my bike from making contact. I don’t know how I did it, but I unloaded an extra few kilowatts and dropped the back wheel fast enough that he just missed me. I even somehow managed to not wreck myself.

Here’s what’s important about that story now: in 250,000 km of cycling I’ve almost been slain dozens of times, but that is the only serious close call I’ve ever had where I was culpable. I could have killed that child, but I think it’s more likely that I would have given him a nasty mauling. I’m not even going to claim that I’ve never been stupid when cycling around pedestrians. The important thing to think about here is that no matter whose fault it is, the most severe collision I can think of between a cyclist and a pedestrian is far more pleasant than the mildest collision I can think of between a car (at speed) and a human.

What does this have to do with autonomous vehicles? Currently the leaders in the AV field are selling Teslas, Mercedes, Cadillacs, etc., huge, massive cars with enormous power. What if we reconsidered the list of problems involved in designing an autonomous vehicle by limiting the vehicle to the same order of magnitude of energy as I use on a bicycle? (For reference, my one-hour PR is about the range and speed of recent electric bicycles, around 40 km.)

With that inverted perspective suddenly the situation is radically different. Sure, you could hit some person or dog or other unexpected soft thing, but the severity would be so much less that it might not need to dominate all other considerations. With limited energy you won’t be going 80 mph. You won’t be towing a boat. But I can imagine a useful, very small, light vehicle that can travel at bike speeds. Autonomously.
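To put rough numbers on "minimize energy", here is a back-of-the-envelope sketch. The 2000 kg car at 80 mph and the 120 kg rider-plus-trike at 25 km/h are just my illustrative assumptions, not measurements.

# Rough kinetic energy comparison: heavy car at highway speed vs. light trike at bike speed.
# The masses and speeds are illustrative guesses, not measurements.

def kinetic_energy_joules(mass_kg, speed_m_s):
    """KE = 1/2 * m * v^2"""
    return 0.5 * mass_kg * speed_m_s ** 2

MPH = 0.44704    # meters per second per mph
KPH = 1 / 3.6    # meters per second per km/h

car   = kinetic_energy_joules(2000, 80 * MPH)   # big autonomous sedan at highway speed
trike = kinetic_energy_joules(120, 25 * KPH)    # rider plus light electric trike

print(f"car:   {car/1000:7.1f} kJ")
print(f"trike: {trike/1000:7.1f} kJ")
print(f"ratio: {car/trike:7.0f}x")

The worst case has several hundred times less kinetic energy to dissipate, which is the whole point.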

I’ve been thinking lately about building my own autonomous vehicle. Rather than dismantle and modify a crushingly expensive real car only to find that it is an intractable menace to humans, I’ve been thinking about a tricycle. It could be a dorky looking thing someone’s grandmother might ride or something fancier. But the idea would be to combine it with an electric drill (essentially) and a laptop. Ultimately the laptop could do double duty autonomously driving the rig while entertaining you.

Where could this be used? Why would this make sense? Well, it’s kind of a tricky gray area right now. Although I rode one in the Swiss Alps in 2001, electric bicycles have only recently become a major thing. I have mixed feelings about this. Another word for a motorized two-wheeled vehicle is "motorcycle". But that’s exactly what I’m arguing here - it’s good to explore regimes of power and weight physics different from the ones that old technology constraints got us used to.

If you believe in electric bicycles as bicycles, then the nation of Holland could be transformed today into an autonomous vehicle utopia based on the fact that they have already properly insulated humans from their dangerous car infrastructure.

The concept is this. You should be able to climb on your autonomous tricycle and although cool people will be laughing at you the whole way, smart people will notice that you’re getting work done on your commute. Maybe your commute would take longer, but I’ve always argued that focusing on an obnoxious task for 10 minutes is much more wasteful of one’s precious time than doing whatever you want in a confined space for 20 minutes.

Folks, let me tell you something as someone whose life depends on being an exceptionally keen observer of this fact: the killer app today is being able to text while driving. It is not just figurative; people will literally die to be able to do this.

Although you probably were thinking autonomous vehicles were going to look like this…

tesla.jpg

…I’m suggesting keeping an open mind to having them look like this.

trike.jpg

One of the added bonuses of a low-lethality approach is that it bypasses regulatory challenges. It is approachable by real entrepreneurs, not just lucky rich guys. It can be augmented with solar power and (gasp) human power. It’s a massive improvement on the traditional approach in terms of energy usage and pollution. It addresses the absurd problem of solitary commuters each hauling around wasted capacity for five. It is easy to correct if it gets into difficult situations (ahem). It becomes a better idea the cheaper batteries, computers, and sensing hardware become. Building custom infrastructure for such low energy vehicles is orders of magnitude cheaper and easier. Although this would work magnificently in Holland, it actually has the greatest potential at the opposite end of the infrastructure spectrum, in the third world. The absurdly rich have had autonomous vehicles for millennia; a low barrier to entry opens the possibility to poor people, motivated people.

The goal is not to develop a car like KITT or Optimus Prime. We do not need cars to be so human that we can be friends with them. I sometimes feel like that’s what AI researchers are going to be stuck working on forever. What are the real goals?

  • Transportation - Point A to point B.

  • Safety - Lower the horrific mortality baked into the current setup.

  • Value - Free up time, use fewer resources, lower TCO, etc.

What is not on that list is "look like a normal car". Instead of taking the physics profile of a 2000kg car and wondering how we can make it drive like Roy Batty, why not start with the physics profile of a bicycle and ask what is required to achieve the goals of autonomous driving?

The Magic Of Recurrent Neural Networks

2017-04-11 22:28

I do work for a computational structural biology lab and one related topic that I’ve become somewhat interested in is protein secondary structure. The super quick summary is that your body (and all life that we know of) has DNA encoded as a blueprint for what to build. You would think that by looking at the DNA we could tell what sort of 3d physical biomolecular machines would result. But we can’t! It’s a huge unresolved problem.

To understand secondary structure, imagine a skein of yarn where the yarn changes colors every millimeter so that its position can be encoded in binary. In simpler terms, you can tell what exact part of the yarn you’re looking at anywhere along it. Now you want to know, for millimeter number N, will this be on the outside or the inside of the yet-unknit sweater? Note that this is a lot easier than asking which part of the sweater (sleeve, neck, etc.) it will show up on. You could imagine something easy where it looks like this: in, in, in, out, out, in, in, in, out, out, etc. Protein secondary structure is a bit like this problem, only imagine the yarn after a cat has thoroughly tangled it and we want to know, at millimeter N, is it in a big knot? Or is it in a dangling loop? But you can see that if a section was just in a big loop, it is (statistically) likely to still be in a loop, or, after enough of that, it might be due to transition into a big knot.

With proteins, the yarn is not homogeneous. Each little segment is a single amino acid comprising the polypeptide chain. There are 21 different flavors of amino acid. In my sweater example, there is an inside that touches your body and an outside. In protein secondary structures, there are three major categories: alpha helices (think old telephone handset cords), beta sheets (the main ingredient of silk to loosely stitch together my sweater analogy), and what I like to call "other". There are other classification schemes that are fancier, but let’s start simple.

The problem then is this. Starting with an amino acid sequence (converting from DNA to amino acid is not complicated), what sorts of loops and turns can be expected from each part of the chain even though it is unknown how those loops will be arranged in the big picture? (Sweater? Hat? Mittens? Don’t know that.)

Can it be done? I felt like the answer should be yes. It seemed to me like this was a problem similar to machine translation of human languages, like Spanish to English. In fact, here is a much closer analogy. Imagine you had a huge body of text and some linguistics researchers had laboriously annotated every part of speech of every word, where "P" is pronoun, "V" is verb, "A" is adjective, "R" is preposition, etc. For example:

thereissomethingmagicalaboutrecurrentneuralnetworks
PPPPPVVPPPPPPPPPAAAAAAARRRRRAAAAAAAAAAAAAAANNNNNNNN

If I had a huge quantity of such annotated sentences, could I train a computer to tell me what the part of speech code is at position N of a novel sentence? The fact that machine translation exists tells us that the answer is probably yes since this problem seems easier. Here is what the training data looks like for protein secondary structure. The top line is the amino acid sequence (e.g. "E" is glutamate, the main stuff of MSG). The bottom line is the secondary structure codes: "H" is alpha helix, "G" is 3/10 helix, "I" is pi helix, "E" is beta strand, "B" is beta bridge, "_" is coil, etc.

TPDCVTGKVEYTKYNDDDTFTVKVGDKELFTNRWNLQSLLLSAQITGMTVTIKTNACHNGGGFSEVIFR
__B_EEEEB_EEEE_____EEEEE__EEEEE__GGHHHHHHHHHHH__EEEEE_______EE__EEEEE
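
Just to make the data format concrete, here is a minimal sketch of how a sequence/label pair like the one above could be encoded numerically for training. The alphabets here are simplified assumptions (20 standard residues, 8 structure codes); real data sets need more care with nonstandard residues and missing labels.

import numpy as np

# Simplified alphabets - real data sets use fancier schemes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard residues
SS_CODES    = "HGIEB_TS"               # helix, 3/10 helix, pi helix, strand, bridge, coil, turn, bend

seq = "TPDCVTGKVEYTKYNDDDTFTVKVGDKELFTNRWNLQSLLLSAQITGMTVTIKTNACHNGGGFSEVIFR"
ss  = "__B_EEEEB_EEEE_____EEEEE__EEEEE__GGHHHHHHHHHHH__EEEEE_______EE__EEEEE"

def one_hot(string, alphabet):
    """Return a (len(string), len(alphabet)) one-hot matrix."""
    m = np.zeros((len(string), len(alphabet)), dtype=np.float32)
    for i, ch in enumerate(string):
        m[i, alphabet.index(ch)] = 1.0
    return m

X = one_hot(seq, AMINO_ACIDS)   # inputs:  one row per residue
Y = one_hot(ss, SS_CODES)       # targets: one row per structure label
print(X.shape, Y.shape)         # (69, 20) (69, 8)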

It brings up an information theory question. Did evolution devise something much more horrifically complicated to encode semantics in our genomes and proteomes than our cultural evolution did with our spoken language? I’m not ruling it out, but I’m also not immediately thinking of a reason why that would be likely to be true.

In researching this topic I discovered that I wasn’t the first person to suspect that this problem could be solved better with machine learning than with other approaches. Indeed, here is a paper from 1993 which attempts to apply neural networks to this problem. Of course the tiny little model that was used was adorably pathetic by today’s standards. It is the most basic neural net architecture right out of an introductory textbook. I suspect that such an architecture will not get very far with this problem. And this paper only trained on about 126 sequences. That’s really not enough to learn anything sensible. By comparison, I have 1400 ready to go right now with between 50 and 70 AA (selected to minimize length variation). I can easily get 100k more from the PDB. Instead of two tiny layers, I can have a dozen with thousands of neurons each to really give my model a good shot at global feature detection.

Over the subsequent years, many other papers have been published and the complexity and accuracies have steadily gone up. Although I hardly understand all the technicalities of machine learning, when I first heard about recurrent neural networks I thought it sounded like a pretty good technique for the secondary structure problem. And sure enough, it has been tried (last year) and it seems to be the way to go. In that paper they do seem to have gone a bit crazy with the model complexity. I am curious to know if that is really necessary, but I’m still working out how to implement this myself. And find time to!

That’s the setup. Why do I think RNNs might be useful? They tend to do well at matching input sequences to correct output sequences, and that is exactly what’s required here. They allow the model to remember features from earlier in the sequence which may be useful later on. And, well, they seem kind of magical.

The basic concept of a recurrent neural network is that as the system processes input, the outputs are fed back into the system. The system is tuned to balance this process. First, it selectively accepts new valuable information from the inputs. Second, it discards old information that seems to be ineffective. This is a massive simplification. There is fractal complexity at every turn and the tuning of the information gates can be quite baroque, involving intricate circuitry. This incredibly well-crafted and clear lesson on the esoterica of RNNs is worth looking over just to see how to teach a difficult topic.
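To make this concrete for the secondary structure problem, here is the flavor of minimal model I have in mind, sketched in PyTorch. This is my own toy sketch under simplified assumptions (one-hot residues in, eight structure classes out), not the architecture from the 1993 paper or the recent one.

import torch
import torch.nn as nn

class SSTagger(nn.Module):
    """Toy per-residue secondary structure tagger."""
    def __init__(self, n_residues=20, n_classes=8, hidden=128):
        super().__init__()
        # Bidirectional so each position gets context from both directions of the chain.
        self.rnn = nn.LSTM(n_residues, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):          # x: (batch, seq_len, n_residues) one-hot
        h, _ = self.rnn(x)         # h: (batch, seq_len, 2*hidden)
        return self.out(h)         # scores: (batch, seq_len, n_classes)

model = SSTagger()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 69, 20)          # stand-in batch; real inputs would be one-hot residues
y = torch.randint(0, 8, (4, 69))    # stand-in per-residue structure labels
loss = loss_fn(model(x).reshape(-1, 8), y.reshape(-1))
loss.backward(); opt.step()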

More research uncovered another exceptional resource. Stanford’s Andrej Karpathy produced this fantastic article, The Unreasonable Effectiveness of Recurrent Neural Networks. He backs up that sentiment with some amazing examples of what a (relatively) simple RNN can do. For example, he trains on the works of Shakespeare and is able to synthesize ersatz Shakespeare which I can barely distinguish from the real thing.

He synthesizes Wikipedia content, C code, and LaTeX from a math book, all of which look shockingly real. The amazing thing about this is that it isn’t combining words or phrases or other high-level tokens. It is generating plausible content character by character.

Not only does Karpathy have the goods to do miracles with RNNs and then write lucidly about it, he provides source code so you can try it yourself. I must stop here and acknowledge that there are a lot of idiots in the world and this includes an astonishingly high percentage of computer programmers. I am typically pretty horrified by the code of other people, but I want to point out that I am capable of appreciating brilliant code when it makes its fleeting appearances. And Karpathy is a genius. Check out this brilliant program. Short, clear, and requiring no annoying hipster dependencies. Not only that, but even I got it working!

I was able to train it on the corpus of all my blog posts, about 500kB. When the program starts this is what comes out.

^:M4OgbRFYM-B1M-^XGA7+BM-^Y/D;qQ}e&GEg-MW!M-^]a';XM-^OM-^T@M-^@YW2M-b^
C1cl(@&`M-C\}nM[:%F1M-^\vF;M"^VM&%tyyM-^OG^hYCM-^\9R:4$DF?Gr{lI|y#R*p7
SM-^WM-bM-C`M-^X.9>>7~M-'$
 3dam/TTv5uM-^]!LcuM-,#"LM-^Xls8$
 B" XM-CZM-
^Xc.d1M-^YM-^BZ:^U,M-^\zj`Vp,t\l)S(Ft@Z6Oq'1lvdQG5[M-^O}Q?-Us $

As you can see this is essentially pure noise. The guesses about what letter comes next are essentially meaningless. This could be the beginning of something trained in Chinese.

But quickly, about 10 seconds later, this starts to appear.

g ]enecaatarpo nhtt3rle7arg r-wsor$
s AtM-Bu[-U/Ik : =s cg7srhpijeh)shsis s I7-. tess,=si#Yholef Lhoyg$
s feopMpcserhefBnrel=Is:n (Ne vWhte-P woonnbuk]H^CtTwot.che$
mile.e44ondviheioK_Wo $
ual te ct77lited me 3ywto-:_slOwh l aw _bM-^@A [onaavy hesbik,t slt tilbesssm
nnpg$
lfe7 bandnd_ptos$
2>yo i f cttaiw1ee0s $
atcrR saD$
no s aulvhel+nnabtucilscip ycas$
othedomdei.eni tP 5aord i nsson pionc ely or.o=minn _ilr me bouroeil niulasum$
ibgat tonle msrnplaomyde fyqh lh__=op tosFaorses linl toen/awertiltohyaainl $

Clearly this already is not Chinese. Dutch maybe? After leaving it all night, I scraped off the last megabyte of output and grepped for the word "Microsoft".

j0 cing, I devaving navery of this.no of to completit Microsoft
ig lot an a him astoplest to idn't that on Microsoft which
machinastly, thot have beint.
hat toisien mewebles in a duch whokn your by Microsoft cisumenced
metwiously Microsoft and maker, you andly all can it ideally and un
This that's Microsoft that Pymase the everywny would see famed from
Microsoft" is say
log/2206==/xe-Zy.com/max-forhinct-radare/peg[Miczor menter thans, is
Microsoft,
work Microsoft
gyeard that may this the of Microsoft not preater that oped
== wwa be you Microsoft. Bat Mich mishon; the transiling a be of the
interrentever Ararent have centloven bict and to me of Fur
Anjengor,  Heruend or about of - 10 or bit swould when, Microsoft
intereringer hat, benowary too
Microsoft "dolen/[book starre. My and
of at create Microsoft owight the drive smal(2.1332052/toxmLM//APre
that is worghow, swad the treasy
with so Microsoft]. I bear on justion, Gen, take off a web that I was
right will trickelt attemut how cutonced
worager some of conton do the presic learnise" in Microsoft a security
is such,
Microsoft shed dad verious g
haur * Cryside fill or quit us/angity to somerrowal_disive tays Goumen
and the spire watt on they can it Microsoft and the worme saprint as
beatir what bade itefend), Or Incless to do you've di
Microsoft annarattich
them)! I was ablest a that. It was need to Microsoft and use
environity_
n soidty Microsoft place othel of my besiverded-ranely Waz suc,
(lavents. It dilligid I've

As you can see it correctly noticed that I have a special interest in Microsoft. But consider that it didn’t just find Microsoft in the corpus. It learned about that by reading it. It learned to spell it and capitalize it. The rest of the text is pretty nonsensical, but it looks ok. From a distance it seems more representative of English than lorem ipsum. Certainly if I were planning generic page layouts of my blog, I’d use this text.

Another interesting feature this model picked up was that I like to link to Wikipedia. Here are some examples.

https://en.wikipedia.org/wikile=m/Fuctine/disting--ports." Herestive,
wikipedia.org/wiki/Dowemer-tho-[Ad3s. Pais I read to supperation of
most any midefining staccion

 s in every to riss "veres dasf extorty the stroge to insoftheiging
 bely aysned by I'm pomelarn drea to actually for this shill
 https://en.wikipedia.org/wiki-tomm[Bal tin's you've a open in a if
 vastly

https://en.wikipedia.org/wiki/Todinefullws.com/pitn.com/Se=ftm* be
engy. I emalle

https://en.wikipedia.org/wiki/Tougers_Rowinav[just:ikn:/g/20.asching.jpg[hyperabb

https://en.wikipedia.org/wiki/Tongel-thided_limailtunteracting.html[66
sisuate] ackird wwats mabion to at're)ds al and

https://en.wikipedia.org/wiki/Ponters-bubst.com/Mivi1.html[resebulary up

"https://en.wikipedia.org/wiki/gure(Lity definiteter
_________________________dopecoPrywgrmlesm=0/Vige: If yess, perprial

https://en.wikipedia.org/wiki/As-simolly

https://en.wikipedia.org/wiki/Metive-conortent/rep.boum.

n soidty Microsoft place othel of my besiverded-ranely Waz suc,
(lavents. It dilligid I've
https://en.wikipedia.org/wikile=m/Fuctine/disting--ports." Herestive,

This is pretty astonishing. As you can see it gets the beginning right reliably, but when the specifics are required, you can see where the system runs up against its limitations. It often even tries to follow the URL with the bracketed anchor text (which is the Asciidoc syntax I use).

All good fun. Maybe one day I’ll figure out how to apply this powerful magic to the secondary structure problem. Others are more ambitious. The modern on-line assistants like Siri and Alexa are using RNNs, among other tricks. The idea behind an RNN chat bot is that the input is treated like a sequence and over time an output sequence is learned. This works best in a limited-domain situation (resolving simple tech support problems, for example). If you train it on a zillion requests made by customers and the corresponding call center responses, that, apparently, is kind of sufficient for the model to synthesize new answers to similar problems. Pretty freaky really.
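
The skeleton of that idea, often called sequence-to-sequence learning, is surprisingly small. Here is a toy sketch with made-up vocabulary sizes and random stand-in token data, just to show the shape of it; real chat bots pile a great deal more on top.

import torch
import torch.nn as nn

# Toy encoder-decoder sketch of the chat-bot idea: the customer's request is squashed
# into a state vector and the decoder unrolls the call center's reply from it.
# Vocabulary size, dimensions, and the random "data" are all made up.
VOCAB, HIDDEN = 1000, 256
embed   = nn.Embedding(VOCAB, HIDDEN)
encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
to_word = nn.Linear(HIDDEN, VOCAB)

request = torch.randint(0, VOCAB, (1, 12))   # stand-in token ids for a customer request
reply   = torch.randint(0, VOCAB, (1, 15))   # stand-in token ids for the logged reply

_, state = encoder(embed(request))                # whole request -> single state vector
out, _   = decoder(embed(reply[:, :-1]), state)   # teacher forcing: feed the known reply
loss = nn.functional.cross_entropy(
    to_word(out).reshape(-1, VOCAB), reply[:, 1:].reshape(-1))
loss.backward()   # the training signal is simply: imitate the logged replies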

Autonomous Vehicles - A Walk In The Park

2017-04-09 15:38

From the perspective of the autonomous vehicle industry, I hold two iconoclastic opinions that would seem contradictory.

  1. Autonomous cars are not happening in the foreseeable future.

  2. The technology for successful autonomous vehicle deployment in real commercial situations was ready about ten years ago.

That’s very confusing, isn’t it? How can autonomous cars not have a plausible future if the technology was ready a decade ago? Yes, that is exactly my question. I have covered why I (and many industry leaders) do not believe that autonomous vehicles are coming soon. How exactly, then, can the technology be ready now and have been patiently waiting for a decade?

In my article, Autonomous Car Plan Without Supernatural AI, I propose a very valuable economic and technical model for an autonomous car venture that could have started years ago with no special undeveloped technology.

Recently I thought of another obvious application of autonomous cars that is long overdue - parking.

Amnon Shashua, cofounder of Mobileye, recently gave a brilliant talk at MIT on the state of the art of autonomous vehicles. His talk is called "The Convergence of Machine Learning and Artificial Intelligence Towards Enabling Autonomous Driving". Pretty fancy! He talks about all kinds of very tricky problems. At 8:15 he divides the problems facing autonomous driving into three categories: sensing, mapping, and driving policy. None of these things are required for autonomous driving. None. Let’s explore how (even 10 years ago) cars can drive by themselves without complicated driving policy issues, without mapping, and incredibly, without any kind of sensing whatsoever. Perhaps the first rule of moving forward with autonomous vehicle technology is to look at what AI researchers are finding difficult and specifically avoid that problem.

Back to parking. If you drive to a store, sporting event, whatever, inevitably one of the most annoying aspects of your whole day will be parking. James Bond’s car sporting machine guns is far more plausible than some of the parking spots he lucks into. The thing that is very special about the parking problem is that it is not on public roads. Many people in the business believe that changing the infrastructure to accommodate automotive autonomy is absolutely unthinkable. It simply cannot be done, they believe. But (in addition to too many obvious examples to mention) it obviously can; if it could not, by the same arguments, there could be no parking lots.

If some private developer can build acres of drivable asphalt in the first place, then I’m pretty sure they can make minute adjustments to help out autonomous cars. And it wouldn’t really take much. All that is needed is a drop-off area and a dedicated roadway (one lane could suffice) to some special parking area. All the cars need is drive-by-wire and wifi. They don’t need mapping or complex policy AI to interact with humans in parking lots. Once the owner of the car gets out, the car begins its unmanned autonomous journey on private roads that are guaranteed to be clear of anything complicated. See how I just got rid of 90% of the problems in autonomous driving?

Let’s get rid of another 9%. (The last 1% we can do!) We don’t even need sensors. Seriously. The cars can be blind. All we need are stationary surveillance cameras. You could say that putting cameras in parking lots would be too expensive, but I’m not talking about 1906, I’m talking about 2006, when such cameras were so cheap that they showed up in $30 flip phones. The cameras can cover, redundantly if you like, the entire journey the car is expected to make on its own. All of the computation for the system can live in proper computers that don’t need to be squeezed into the wattage a car can sustain. The car makes an SSH connection to the central control. The central control plans a trip and starts giving the car low-speed instructions for getting to a parking spot. The control watches to make sure everything is going well. If it doesn’t, we have an insurance claim and not trolley-problem philosophers.
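
Here is a toy simulation of that division of labor, with made-up names and numbers, just to show how dumb the car-side logic can be when the lot’s computers and cameras do all of the thinking.

from dataclasses import dataclass

# Toy sketch: all intelligence lives in the lot's central control; the blind car
# only executes the low-speed commands it is sent. Everything here is hypothetical.

@dataclass
class BlindCar:
    position_m: float = 0.0
    def apply(self, speed_m_s: float, dt_s: float = 1.0):
        self.position_m += speed_m_s * dt_s    # drive-by-wire: just do what you're told

def central_control(car, spot_m, camera_sees_obstacle):
    """Crawl the car to its spot using only the overhead cameras for sensing."""
    while abs(spot_m - car.position_m) > 0.5:
        if camera_sees_obstacle(car.position_m):
            car.apply(0.0)     # stop and wait (a real system would eventually alert a human)
            continue
        car.apply(min(2.0, spot_m - car.position_m))   # low-speed command, 2 m/s tops

# Example: send a car 30 m down a dedicated private lane with nothing in the way.
car = BlindCar()
central_control(car, spot_m=30.0, camera_sees_obstacle=lambda pos: False)
print(f"parked at {car.position_m:.1f} m")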

At this point, the people in the autonomous car business may try to make up some stories for the purposes of confirmation bias. They will posit that if it can be done so easily and we haven’t done it, there must be a reason. There is only one just-so story I can think of: nobody cares. Because the auto industry and computer researchers have not solved this problem, it must not be a problem that needs solving. But this is extremely wrong.

This DOT report tells us this astonishing fact about car accidents that occur in parking lots.

…the information on [nontraffic crashes was not] available until 2007, when Congress required National Highway Traffic Safety Administration (NHTSA) to start collecting and maintaining information pertinent to these events.

Wow. That must be because such accidents almost never happen and parking lots are so safe, right? Keep reading that report. There were, on average, 1621 annual deaths in parking lots between 2008 and 2011. And we’re just getting around to keeping statistics on this? Throw in the 91,000 injuries per year and I’m pretty sure there’s going to be a lawyerly amount of money floating around to clumsily mop up the mess post hoc. That expense is passed on to consumers in the form of insurance and legal costs. Even if you are in favor of boosting business for lawyers and insurance agents, surely society would function better without 91k unnecessarily broken people per year.

But even if that medieval-warfare level of horrific violence doesn’t bother you, there are other things to consider about parking. It sucks not just for drivers. It sucks for everybody. This article in The Economist is a great place to start when thinking about the parking problem. There are so many externalities and hidden costs to subsidizing parking (as we do). Just the pollution from people driving around looking for parking is far from negligible.

If you want to tell me that a city taxi driver’s job is still essential, fine. But a parking valet? Come on! If even that job can’t be automated, then we’re never going to see any real autonomous cars ever. So let’s start with this fruit that’s not just low-hanging; it has dropped right into our shopping basket!

The real reason I’m keen to see it isn’t that I give a toss about parking personally. My family understands that I like to proudly park in the usually empty section reserved for athletes (sort of the opposite of the idea behind handicapped parking). What I think is important about pursuing this is that it can be a success story. It can be real, not vapor. It can make a real ROI for people working on autonomous vehicles. It can comfortably create a track record of safety. It can acclimate people to the idea. It can spur car makers to get on with some progressive hardware. It can generate new ideas about what is possible and where improvements can be made in real commercial situations. Before climbing Mt. Everest, try something hard but more attainable, like Mount Cook; Sir Edmund Hillary wisely did.

Update: The very competent civil engineer points out that this kind of thing is commercially viable.

Anti-Troll Brakes

2017-04-04 13:42

Here’s an interesting article about autonomous cars called Google’s video recognition AI is trivially trollable. Wait, you say, that doesn’t sound like it’s about cars, and indeed the article does not mention cars, autonomous or otherwise.

That’s what I’m here for! To fill in that blank. The essential point of the article is summed up nicely in this quote.

Machine learning systems are typically designed and developed with the implicit assumption that they will be deployed in benign settings. However, many works have pointed out their vulnerability in adversarial environments.

I am in total agreement with this. It reminds me of the history of operating systems and networking where everyone was thinking, "Wow! This is so cool! I can share everything with all of my cool new friends." And those "friends" eventually turned into cryptocurrency pirates.

I am confident in my lack of faith in an exclusive machine learning solution for autonomous cars. The reason is that a problematic circumstance does not need to arise organically through random bad luck; even the most remote and obscure failure case can be deliberately cultured with adversarial AI or even fuzzing. And this is the kind of thing security researchers love to pick at.
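
For a taste of how cheap that culturing can be, here is a minimal sketch of the classic fast gradient sign method in PyTorch. The tiny linear "classifier" is a stand-in for illustration, not Google’s video recognition system or anything from the article.

import torch
import torch.nn as nn

# Fast gradient sign method (FGSM) sketch: nudge the input in the direction that most
# increases the loss and the classifier's answer will often flip. Model and data are toys.
torch.manual_seed(0)
model = nn.Linear(10, 2)                  # pretend feature vector -> 2 classes
x = torch.randn(1, 10, requires_grad=True)
true_label = torch.tensor([0])

loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.5                             # adversarial perturbation budget
x_adv = x + epsilon * x.grad.sign()       # step *up* the loss gradient

print("original prediction :", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())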

I’m currently trying to use RNNs to analyze protein peptide sequences; adversarial meddling isn’t such a problem with that. But if deliberate human actions are involved in the data sets, as they are with uploading videos or driving on public roads, then assuming the worst is not foil hat nuttery. Even a tiny amount of computer security experience will make that clear enough.

UPDATE: If you’re quite interested in the technical details involved in the security ramifications of machine learning, here is an excellent talk on that exact subject.

--------------------------

For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2017