Chris X Edwards

`ipa-server` has 199 dependency packages. Time to seriously rethink things.
2018-03-18 17:51
At least if BTC crashes no one will be stuck with a bunch of embarrassing Beanie Babies. Or embarrassing subprime mortgage tranches.
2018-03-16 10:04
The sermon I'd go to church for: Why We Have A Lightning Rod On The Roof
2018-03-12 16:12
Little known fact: I like trivia games.
2018-03-10 13:41
Why no team versions of boxing? Or figure skating? Seems no less sensible than the normal form.
2018-03-09 12:42

Autonomous Vehicles Could Drive To Space

2018-03-04 13:50

I just finished reading Hieroglyph, a collection of short science fiction stories (which has its own website). The idea behind the book was to present "stories and visions for a better future". That’s nice. What caught my attention was that Neal Stephenson was a contributor, and as a fan I wanted to check that out. Stephenson’s story, Atmosphaera Incognita, was about a very tall tower. Bruce Sterling also wrote about this idea in a fanciful story called simply "Tall Tower", but his story was less technically serious, so let’s focus on Neal’s.

Why would tall towers be a thing for optimistic science fiction? Basically, when spaceships leave a planet, the difficult thing they must do is break free of the planet’s gravity. It turns out that if you can start your trip to space from a very high altitude, you can reduce that problem substantially. At 35,786 km above sea level, the centrifugal force that wants to fling an object co-rotating with the earth out to space perfectly cancels the gravity holding it back; that is a geostationary orbit. The idea of the tall tower is to get as far up toward that as possible; in Stephenson’s story the tower top was at 20000m altitude. From there, spacecraft can be launched much more easily (double the payload compared to a sea level launch), radically reorienting design priorities for space travel.
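That 35,786 km figure is just the altitude where gravitational pull equals the centripetal acceleration needed to co-rotate with the earth, GM/r² = ω²r. A quick sanity check in Python, using standard textbook constants (nothing here is from the story):

```python
import math

GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.1   # one full rotation (sidereal day), seconds
R_EARTH = 6.378137e6   # equatorial radius, m

omega = 2 * math.pi / T_SIDEREAL     # rotation rate, rad/s
r = (GM / omega**2) ** (1 / 3)       # solve GM/r^2 = omega^2 * r for r
altitude_km = (r - R_EARTH) / 1000

print(f"geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```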

Neal had all kinds of clever ideas for how this could be done. He mostly envisioned a steel structure being assembled on the ground and being jacked into space piece by piece. That’s fine. One of the problems with this is that between 9000m and 16000m our planet’s atmosphere has some very brutal weather called the jet stream. It’s hard to imagine a tower handling that well. Strangely, in the notes, Stephenson credits none other than Jeff Bezos with the idea to stabilize the tower using basically computer controlled aircraft engines. Although imaginative, that sounds pretty precarious to me.

Obviously above 4000m it’s difficult for normal people to breathe, and above 8000m only superhuman people can avoid dying without bottled oxygen. A little higher up, radiation exposure becomes a problem and it’s always extremely cold. About halfway up Neal’s tower, you really need a spacesuit.

So I read all about that and it seemed cool. Maybe doable, maybe not. I felt like if something as dumb as a tower could be worth investigating, maybe there were other dumb approaches worth looking at. My thought was a mountain.

In all of my travels around the world I’ve been quite impressed by the wonders of the natural world, the Grand Canyon, Denali, Iguazu Falls, etc., but what has really impressed me is barbed wire. Yup, that pernicious stuff which, mapped from space, completely strangles the entire USA in a fine weave. What’s important about that idea is that humans did that. Humans are badass when it comes to massive engineering projects. Besides the endless barbed wire (flanking every inch of the equally impressive system of roads), I’ve also seen magnificent engineering works from the tar sands of Alberta to the giant but typical landfill right in San Diego. To me the earth’s mountains are less impressive than man’s ability to flatten them.

Can humans make a mountain? There is no doubt; that is not in question. Just check out these massive mines.



The question is only whether we can design and build an absolutely enormous mountain, far higher than any that naturally exists. I suspect that if humans were suitably motivated (as in this excellent Stephenson story), the answer is yes.

Once I started thinking about this I realized that there are some favorable design elements to this approach. First off is that it would seem to go nicely with mining. If you’re going to dig some absolutely enormous holes to get, say, copper, why not put the tailings into a big pile? Also that jet stream is less of a problem for a mountain than a tower. Instead of requiring tons of energy pumped up to active stabilizing engines, the mountain could have wind turbines at strategic altitudes.

One of the big problems with a tall tower that pokes out of the atmosphere is that it is very hostile to human activity. This is where the autonomous vehicles come in. I keep telling people that autonomous vehicles are not some futuristic technology. They exist today. You can go to your dealer and buy a fully autonomous vehicle today. People think that Waymo or Uber are leading the way in autonomous vehicles. As far as I can tell, the real leaders are Komatsu and CAT.

Check out this video of the awesome work being done by Komatsu for autonomous mine haulage.

If ever there were a technology that could create a mountain, that is it!

They’ve already automated traditional mine hauling trucks; this newer design, however, has no operator cab at all. Perfect for driving around in space!


These autonomous mine trucks have tons of advantages for this kind of work. They can go forward or reverse with no preference. They can have complex four-wheel steering, drive, and regenerative braking that senses unstable situations. They can be very precisely positioned. A small team of humans can manage an arbitrarily sized fleet. They can be powered by overhead wires or some other sliding electrical interface that demands driving precision (the kind normally associated with rail-guided city trains). There is no load size that would overwhelm a human pilot’s abilities. Perfect coordination with other vehicles is possible (imagine coordinated pullouts letting upward and downward loads pass on the same road).

Hopefully you can visualize how these kinds of machines could build a mountain.


The question is can a big mountain be feasibly synthesized?

What does the engineering say? Well, I’m lazy and about as much of a mining specialist as Neal Stephenson is a high altitude iron worker so this will necessarily be for inspirational purposes only. I’ve started this science fiction off by dreaming up a design I think seems plausible and efficient, but of course this is just a wild guess and smarter analyses could produce much more effective designs.

My favorite civil engineer tells me that the angle of repose can be pretty variable and 2 to 1 (width of base to height) is normal for small slopes but that taller ones may need 3:1. Natural mountains tend to settle in around 5:1 (roughly matching the steepest roads you’re likely to drive).

The design I made is more optimistic. Maybe some technology that can keep a steep slope stable is possible. Let’s pretend that’s true and look at the industrial engineering on a shape like this.


This has the three base points lying on a circle of radius 1 and a height of 2; the volume works out to about 0.5 cubic units. If we built this in the Bolivian high desert at 4000m elevation, we might want a 15,000m high mountain. This would mean a volume of roughly 4e9 cubic meters of material, which I figure is about 1.7 times the volume of Mount Everest. Seems about right. How hard would it be to move that much rock?

Let’s wildly assume that the hauled rock will be 3000 kg/m³. Multiplying that by 4e9 m³ gives 1.2e13 kg to haul. These trucks can move up to 400,000 kg per load. This divides out into a cool 30 million loads. With enough trucks to allow one to depart full and arrive empty every second, it would take slightly less than one year. To put that in perspective differently, the DOT says that each person in the U.S. requires the movement of 40 tons (36,287 kg) of freight per year. For 327e6 people this works out to roughly the same amount as what I’m talking about.
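Running the numbers in that paragraph (all inputs are the rough figures above, including my assumed rock density):

```python
# Back-of-envelope haul arithmetic for the synthetic mountain.
volume_m3 = 4e9        # ~4e9 cubic meters of rock, per the estimate above
density = 3000         # assumed density of hauled rock, kg/m^3
load_kg = 400_000      # payload of one large autonomous haul truck, kg

total_kg = volume_m3 * density          # 1.2e13 kg to move
loads = total_kg / load_kg              # 30 million truckloads
years = loads / (365.25 * 24 * 3600)    # at one load arriving per second

# DOT cross-check: ~40 tons (36,287 kg) of freight per American per year
us_freight_kg = 327e6 * 36287

print(f"{loads:.0f} loads, {years:.2f} years, "
      f"ratio to annual US freight: {total_kg / us_freight_kg:.2f}")
```

The ratio to total annual US freight movement comes out almost exactly 1, which is the "roughly the same amount" claim above.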

Is this feasible? Certainly it shows that humans can move that amount of stuff if they’re really motivated. I’m just trying to rough in the concept, but obviously there is a lot of detailed design work to do. Not only do I not have a clear view of all the answers, I can barely understand all of the questions. My favorite civil engineer further points out…

Any (really good) kind of soil will be limited to say 30 psi compressive strength. The pressure at the bottom of the pile will be about 50,000 psi. Good concrete crushes at about 6000 psi. This may be a limiting design variable for mountains, which don’t usually get much above 8000 m. Modern steel of course has a yield strength of upwards of 80,000 psi, so some kind of tower may be more feasible.

I don’t exactly know how a pile of dirt fails (landslides?, settling?), but apparently that could be a problem and at that scale it could be pretty serious (earthquakes?, regional weather changes?).

Still, if you can talk optimistically about an absurdly tall tower waving back and forth in the stratosphere, it seems like you should make absolutely sure a tall mountain isn’t a better solution.

Here’s my artist’s rendition of the thing set in the Bolivian Andes (high altitude plus equatorial launch latitude plus copper mining).


The mountain concept has a lot of benefits. For example, it could be an excellent engineered facility for entombing nuclear waste. The Yucca Mountain fiasco reminds us that this is not easy to do. With clear up-front access to the base of the mountain, I also imagine a geothermal energy source could be engineered in. The mountain could be built over a pretty long time horizon, and each increase in altitude would incrementally help space travel.

If you were following the math, you might have noticed I did a 15km high mountain starting at 4000m; what happened to the last 1000m needed to reach 20km? It seems like you could combine the mountain idea with the tower idea. Since the geometry and features of the mountain are controlled completely, a vertical shaft as deep as the mountain can be inset. This could contain a tower which could be hoisted during launches. And although the Stephenson/Bezos tower idea seems doomed to technical failure to me, a temporary tower could be hoisted only when conditions were ideal, and it would sit above the jet stream and lightning problems. This hoisted tower could be actively aligned by thrusters (referenced to clear line-of-sight lasers) like a big vertical snake, powered through cables and pipes from the ground. It would be energy intensive and precarious, but it would extend from the center of the mountain with a spacecraft on top, do the launch, and retract.

Perhaps real astrophysicists could figure out a way to just shoot cargo into space using rail guns lined in the bore of the mountain. This mountain would also make one heck of an observatory. I could go on and on. At the very least, this idea would be a fresh addition to science fiction where hand waving is better tolerated.

Review: The High Cost Of Free Parking

2018-02-25 19:09

Last year I wrote a bit about the topic of parking and how it might interest people following autonomous vehicles. After I wrote that, it stuck in my mind and I kept an eye out for interesting information about parking. I kept seeing references to UCLA professor Donald Shoup’s book, The High Cost Of Free Parking. Finally I was able to get the book from the library and I just finished slogging through its 700 pages.

I’m not going to lie; that was tedious. Shoup is not a bad writer, but the editing in this book is terrible. It should have been trimmed down to about 200 pages. Shoup makes the same (absurdly good) points over and over again in a jumbled order. There is a vast landfill of industrial engineering studies and numerical data. If you are a professional city planner, you need to own this book and read every bit of it. Maybe twice.

But here’s the thing, for the rest of us, well, if you’ve got a brain the title is enough. This especially superb article in The Economist is actually an extremely sensible synopsis of the entire 700 page book. Just as I’ve suggested, the article paraphrases the title, "Free parking is not, of course, really free."

As I started reading the book, Shoup was just pounding on that idea over and over from the get go. By about page 30 I got it. Free parking is not free. Got it. By about page 60 I was horrified at myself for failing to consciously think of this disturbing fact every time I had ever parked a car in my life. How had I failed to sense this issue as one of civilization’s most important? By the end I was starting to have a mental breakdown and looking into how I can donate large sums of money to the John Birch Society.

You see, they say that communism is dead (here’s our friends at The Economist saying just that). But it is not true in one area. In Russia it thrives, in Britain it thrives, but nowhere does it thrive like the USofA! The greatest communist plot ever conceived has been enormously successful and that insidious agenda has been to give every person, as a matter of human rights, socialized parking.


You don’t need to read this book, do you? Come on, just think it through. Let’s say that every bank, veterinarian, dry cleaner, yoga studio, shooting range, Walmart, hotel, etc. was required by law to give customers all the ice cream they could eat. Who could argue with that policy? Ice cream is delicious! Are you some kind of ice cream hating monster to oppose such a brilliant plan? Well, free parking is exactly like this.

Do you think if free ice cream were mandated by law that people would be healthier? How would ice cream consumption be different? Would it really be free ice cream? No, of course not. Just as I pointed out last year that parking lots are really quite dangerous yet until 2008 nobody bothered to keep statistics on just how bad the situation was, the same is true with the cost of parking in general. Who is thinking about this explicitly? Nobody!

The reason for this is that planning departments create a situation where businesses automatically must pass the costs of parking on to customers. They do this by requiring certain levels of parking to go with certain land uses. For example, maybe a prospective hair salon is required to provide two parking spaces for every 1000 square feet of their business. The planning departments don’t pay anything or even see how much this costs. The hair salon buys N times 1000 square feet of land for their business and then another half N more land for parking. Here in California where land is absurdly expensive, this slashes proper utility of the potential space. And obviously the hair salon will pass on what costs it can and simply not exist where it can’t. This replaces consumer choice with poorly utilized wretched parking lots.
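To make the hair salon arithmetic concrete, here is a tiny sketch. The 250 square feet per parking space (stall plus its share of aisles) is my round number for illustration, not Shoup’s:

```python
# Hypothetical salon: N units of 1000 sq ft, with a mandate of
# two parking spaces per 1000 sq ft of business area.
business_sqft = 3000                          # N = 3
spaces_required = 2 * (business_sqft / 1000)  # mandated spaces
sqft_per_space = 250                          # assumed stall + aisle share

parking_sqft = spaces_required * sqft_per_space
overhead = parking_sqft / business_sqft       # extra land, as a fraction

print(f"{spaces_required:.0f} spaces -> {parking_sqft:.0f} sq ft of "
      f"parking ({overhead:.0%} extra land)")
```

With those assumptions the mandate works out to exactly the "half N" of extra land described above, all of it priced at California rates.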

In a word, you reproach us with intending to do away with your property. Precisely so; that is just what we intend.

— Communist Manifesto

Besides the Communist Manifesto, where do these requirements come from? Shoup goes into painful detail about the provenance of these guidelines and, to my satisfaction, demonstrates that they are completely bogus and nonsensical. The planners may know the requirements, but I’m properly convinced they have no idea why those requirements exist. Mostly because they are spurious. The whole question of whether it is a good idea or not to create a communist plot giving every human being parking "to each according to his needs" is never even considered. (I should point out that Shoup just tries to sensibly address a planning problem; the sardonic Red-baiting is mine.)

When you arrive somewhere, looking for parking sucks, right? So isn’t it good that it’s plentiful? Nope. It turns out, strangely, not really. As was pointed out in the book Traffic (which I reviewed here) building more roads doesn’t make less traffic; it just makes more cars. Ditto parking. Shoup convincingly shows that free parking actually exacerbates congestion in several different ways.

As I mentioned, parking can be quite dangerous. Of course as a society, we don’t really care about pedestrians and the last couple of cyclists need to be quietly killed off as soon as possible, but I was surprised to learn some things I didn’t already know about how parking reduces safety. For example, off street lots break up the sidewalks with dangerous curb cuts leading to dangerous car/pedestrian interactions. Shoup actually lauds San Diego’s idiotic diagonal parking which, from my point of view, is a life-threatening nightmare (especially tragicomic are the diagonal spots on W. Mission Bay Dr. that drivers are required to back into; what a circus of death that is).

Shoup talks a lot about "cruising" which is the term of art for someone who is at their destination, but can’t find a place to park. Apparently this is a much bigger problem than one would imagine (if you gave it no thought as is the custom). The pollution, congestion, and, perversely, parking problems this creates are pretty serious.

Obviously parking lots are ugly. If you disagree, just let me know about any beautiful car park anywhere in the world that you know of. I have seen some beautiful bicycle parking garages in Germany and Holland, but if cars are parking there, everyone wants to get that part of their day over with as soon as possible. I know of no exceptions.

The more parking you require, the more parking there will be which means that the distance between places will be puffed out with parking lots. Where a cross town trip in 1920 would be a mile, today, with parking lot metastasis, it would be maybe double that. I’m making up numbers because I was too tired to follow Shoup’s details here but, suffice it to say, again from obvious first principles, this sprawl comes with problems.

Although Shoup didn’t spend too much time on it, I personally suspect that having so much of a city be impermeable to rainwater can’t be good for the region’s climate. I’d also be curious about how all that asphalt changes the weather by soaking up heat in the day in an unnatural way.

Thanks to the glorious communist revolution we are basically screwed, right? Well, probably. I actually don’t think the John Birch Society is going to see this communist plot as worth fighting. But there are some things that could be done if there was a will to do them. Shoup points out that parking meters are good. Fancy modern ones are convenient and quite fair. He’s a fan of peak demand pricing. Basically the panacea, which you’d think every gun-loving, redneck, Ayn-Rand-blathering American with crappy privatized health care would be delighted to switch to immediately, is… capitalism. Yes! Free markets! What a concept! Don’t hold your breath, comrade.

UPDATE 2018-02-26 Looks like this topic is on the mind of the NYTimes. They just published this article which talks about a very similar issue of just charging cars for being in congestion zones in general. The main point is that subsidizing the enormous cost of cars will make people choose to use them stupidly often. Passing on the correct market costs to drivers causes people to make more sensible decisions about driving.

UPDATE 2018-02-28 I was just reading this sensible article at Naked Capitalism about the world’s greatest welfare queens, the US military. The article says "Most Americans are probably aware that the Pentagon spends a lot of money, but it’s unlikely they grasp just how huge those sums really are. All too often, astonishingly lavish military budgets are treated as if they were part of the natural order, like death or taxes."

Parking is like this too apparently, but this reminded me of something specific from Shoup’s book that I wanted to write down so I wouldn’t forget it. Shoup claims that the cost to society of providing free parking is greater than the cost of our cars. That’s pretty impressive, right? He goes on to say that the cost of our free parking is actually even greater than the cost of our roads. With cars parked 95% of the time, this seems plausible I guess. But what really made it sink in for me was that Shoup claims the cost of parking is greater than the cost of national defense. Now, maybe he’s wrong. Maybe all the math and figures he cites to demonstrate this are wrong. If you feel that’s the case, get the book and set us all straight. Details aside, I think we can all see that parking is not just not free; it is enormously expensive, perhaps contending to be the most expensive expense you can imagine.

Neural Network Classifies Signs Of Humanity

2018-01-28 16:33

By now almost everyone has had to take Google’s "I’m not a robot" test. This often involves identifying driving scenery. XKCD brilliantly notes that Google is also pioneering autonomous vehicle technologies and hmm….

XKCD Crowdsourced steering

Here’s one I had today.


They insisted that the bottom center needed to be clicked as a "road". Although I totally am not a robot, I did not concur. Parking for a nuclear powered car maybe? But not a road. Opinions differ. This stuff is hard. Even so, is it possible that a robot actually could pass this kind of test? I say yes. At least most of the time!

Remember when I mentioned a sign classifying project I did for last year’s grand educational campaign? The project was to build a neural network that could classify traffic signs, that is, select which sign was being shown in a small photo. Looking at the photos provided for training made it clear that there can be a lot of messy problems with the quality of the images.


What really shocked me was when I happened to look at the training set in order. Here’s what that looks like.


I realized that these images were collected by analyzing driving videos. The problem with this is that it radically compromises the efficacy of the training set. What it would seem to train the system to do is recognize (and ignore) small frame-to-frame mutations. This is hardly an ideal set of sign images for training a classifier.

I had the realization that this project could be greatly enhanced by thinking about it in a completely different way. There is no need to train a classifier to learn what different signs look like. We know what the signs look like! We know because they are the way they are by definition. If they are not within a defined specification of the Bundesministerium für Verkehr (traffic ministry) they are not the sign!

Looking at the German Traffic Sign Recognition Benchmark web site, I found a collection of decent quality representations of canonical, by definition, German traffic signs. I downloaded these 43 images (one for each type of sign) and edited the borders with a marker color (so a white face would not be confused for a white off-sign region). This allowed me to do chromakey substitution to put these signs in contrived situations. Although we can make arbitrarily high quality versions (because we know the theoretical definition of a perfect sign), here are what low resolution ideal signs look like before any other processing.


Since we know what the signs properly look like (and can tell a computer exactly what that is), the trick really is to figure out what we do want to train the classifier to look for. What we really wanted the classifier to do was learn to be invariant to (not notice) things like scale, rotation, lighting, backgrounds, and perspective. To help train the classifier, I created a program that used OpenCV to randomly morph the canonical image set into some more diverse images. It takes the canonical images and applies the following transformations.

  • chromakey - replaces the "green screen" background with something else.

  • cluttery background - fills in a random number of random shapes of random colors.

  • gaussian blur - this blurs the synthetic images to a random degree.

  • affine rotation - rotates the images a random amount.

  • perspective warp - change the image perspective to a random one. Simulates obliqueness.

  • hsv mangle - changes the overall color and saturation of the image.

  • smallify - all of this is done to original high quality canonical images and then reduced to 32x32.
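To give a flavor of what such a synthesizer looks like, here is a minimal NumPy sketch covering only the chromakey, cluttery background, and smallify steps. The function name and parameters are mine for illustration; the actual project used OpenCV and also did the blur, rotation, perspective, and HSV steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(canonical, key=(0, 255, 0), out=32):
    """Turn one canonical sign image (H x W x 3 uint8, with a marker-green
    background) into a synthetic training image. Illustrative sketch only:
    just the chromakey, clutter, and smallify steps."""
    img = canonical.copy()

    # chromakey: everything matching the marker color is background
    mask = np.all(img == key, axis=-1)
    img[mask] = rng.integers(0, 256, size=3)   # random solid backdrop

    # cluttery background: random rectangles, painted only behind the sign
    h, w, _ = img.shape
    for _ in range(rng.integers(1, 5)):
        y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
        y1, x1 = y0 + rng.integers(4, h // 2), x0 + rng.integers(4, w // 2)
        rect = np.zeros_like(mask)
        rect[y0:y1, x0:x1] = True
        img[rect & mask] = rng.integers(0, 256, size=3)

    # smallify: crude nearest-neighbor reduction to out x out
    ys = np.linspace(0, h - 1, out).astype(int)
    xs = np.linspace(0, w - 1, out).astype(int)
    return img[np.ix_(ys, xs)]
```

The key trick is that the marker-colored border yields a clean boolean mask of everything that is not sign, so backgrounds can be swapped at will without ever touching sign pixels.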

Here are some examples of completely synthetic images that I created from nothing but the knowledge of the Platonic ideal for each sign.


I tried to use this as my training set and it was interesting to note that I could classify 25% of the real image test set correctly using absolutely zero real training images. (For 43 sign types we’d expect a completely brainless random classifier to get about 2.3% right.) That’s interesting and a good start, but clearly that is not a complete solution. I actually believe I could keep working on synthesizing these images until I became really good at it. For example, there does seem to be a lot of green; I may not have replaced my chromakey as I had intended. Fixing that and 1000 other things I can imagine could be done, but there was a more expedient way.

The next thing I tried was to combine my synthetic images with the real training set. The idea was that the synthetics would train the classifier to really understand the signs themselves and the real training set would help it understand that everything else (lighting, color wash, etc) were ok to ignore. I basically added a synthetic image for every real one effectively doubling the set. Here’s a sample of the training images I ultimately used.


By being able to synthesize my own images I could play with different infusions of synthetic images. But since that’s yet another of literally thousands of knobs I could be tweaking (and waiting 10 minutes to see the effect of) I only did rough testing of this. There was clearly no point to using a much bigger training set of these synthetics, but they did contribute quite decently when doubling the normal training set.

Speaking of thousands of knobs to tune, there are literally endless permutations of ways to configure the deep neural network architecture. I wish I was some kind of super genius who could purposefully make changes to the suggested architecture and correctly anticipate a beneficial effect. Alas, I’m just a normal person and I’m not especially lucky. This means that the dozens of modifications I did try to the standard LeNet-5 architecture produced effects that were deleterious or, more often, completely catastrophic. I added layers, changed sizes, tried different activation functions, tried different optimizers, and changed the learning rate, the batch size, and epochs dozens of times. Again there is a riot of knobs which can be tweaked and I’m sure lucky people obtained great results by doing that. For me, the only helpful thing that I deliberately did differently from the way it is normally done is to supply the synthetic images.

One generally unmentioned problem that I’ll go ahead and mention is that one of the best ways to see a dramatic increase or decrease in model performance was to run it again. Yes, with the same parameters! The random fluctuations make homing in on some subtle improvement very challenging. Again, being lucky would, no doubt, be very helpful.

Eventually for the supplied photo images I was able to get a reliable test accuracy over 90% (and a validation accuracy of 93%). The real test would be to get some new photos from the wild. In acquiring new images to test the obvious place to look is Google Maps Streetview. However, I believe that Germany has created privacy laws prohibiting Google from deploying Streetview there. I am however personally familiar with much of Switzerland and they do not have this problem! Since they share similar signage with Germany, I (virtually) went through places in Switzerland I know looking for signs.

In this way I obtained 9 interesting novel sign images. Here are the versions I submitted to the classifier.


The 60kph, right curve, and roundabout sign all are quite oblique (not directly facing the camera). The road work sign is not even a sign, but a temporary marker with some confusing parts from its third dimension. There is a good variety of backgrounds. Some have geometric artifacts, some just random noise, while some are quite clear. I feel there is a good sampling of colors and shapes in this set. Here’s what my classifier thought of these novel images.

Table 1. Correct
(columns were #, actual, shape, bg, oblique, best guess, 2nd guess, notes; the sign images are not reproduced here, but the surviving notes, one per row, were:)

  • round speed signs in top 5

  • #2 is a yellow diamond

  • #2 also round with blue bg

  • amazingly good for such oblique

  • 2nd guess no entry (5%); #2 is round and red with a white middle

Table 2. Incorrect
(same columns; sign images not reproduced here, but the surviving notes, one per row, were:)

  • 30kph (10%)!; round speed signs for top 5

  • worker (4%)!; not even a proper sign but a temp sign

  • Rcurve (10%) was 5th, i.e. very uncertain!

  • 60kph (0.2%) was 5th

I don’t think that the overall accuracy (~50%) is very relevant. I could have cherry picked easy images. The sample size is tiny. Some had known problems. Etc. Etc. What would be interesting would be to see if different cropping or image processing steps on these could improve them. But to treat them all fairly this was about the best I could expect. I’m pleased with the performance.

The only non-oblique sign classified incorrectly was the 30kph. I don’t know why that is exactly since it was actually an ideal image (maybe too perfect?). But I can see how it could be considered very similar to the 70kph sign which was its first pick. I think that oblique signs could be improved if I increased the parameters which set the severity of the perspective transform in my image synthesizer.

Back to captchas… I’m doing a pretty good job of choosing which one of 43 signs is being shown. The captchas only ask whether a sign is present at all, essentially a 50/50 guess. You can see that getting significantly above that would not be a terribly hard problem. So Google, maybe I am a robot!

Review: Fantasyland

2018-01-21 19:17


That would be my satisfactory one-word review if I liked this book. But I loved this book, so I want to say more about it.


America is batshit crazy. That is the premise of Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History. Note that haywire is the premise, haywire is the starting fact. If you’re good with that, then this book walks you through the entire history of the USA as seen through the lens of all that crazy which has, according to the book, been building up in the collective soul of the nation like Alzheimer’s plaques aggregating in Ronald Reagan’s neurons.

For over 400 dense but expertly crafted pages, Andersen fires away with astonishing historical trivia clearly illustrating that the USA has been doing some idiosyncratically wacky stuff ever since before it officially existed. He goes into detail about the kinds of people who, throughout history, were motivated to come to the New World and/or move west in it: get-rich-quick gold dreamers and religious nuts basically. Wave after wave of them.

He does a nice job of showing how the USA is some kind of weird hybrid offspring of the Protestant Reformation and the Enlightenment. Protestantism basically was the rejection of the more inconvenient wacky crazed superstitious nonsense promulgated by God’s special man in Rome. But once that door was open two things were possible. First, the Enlightenment where some of those less credulous believers started to take things to the next level: well, if papal indulgences, etc., are a load of pope-serving nonsense, maybe some of the other crazy stuff the religious leaders say is clergy-serving nonsense too. In fact, thought many influential Enlightenment thinkers, maybe the whole thing is a ridiculous con. So far so good with the Enlightenment.

But that same spirit of debugging dogma had another odd quirky direction it could take. Instead of asking "is all this legacy nonsense really necessary?" some newfangled Protestants were asking "shouldn’t we be adding more nonsense?" Not just something more extreme, but more wacky. The reason for this is that one of Protestantism’s key points was that special magical people (priests, cardinals, etc.) who stand between you and God are probably going to do a bad job. True enough! So the Protestants said, hey, everyone should read the Bible themselves, interact with God, and interpret the resulting hallucinations in their own special way. In these times long before antipsychotic drugs, it seems some people really went crazy with this. The most obnoxious ones were treated like obnoxious people normally are, and they finally felt enough pressure to leave civilization and the company of the not-so-obnoxious people. This was the prototypical American, the Puritan.

Of course moving to a hostile wilderness was no holiday, and the obnoxious people who were lazy and weak were weeded out. Additionally, survivorship bias gave the colonists who did not die of indigenous diseases and hardships reason to double down on their belief in God’s providence. Eventually, in the early days of white settlement in the American colonies, the population skewed toward obnoxious magical thinkers who were motivated, capable, eccentric, and lucky.

What about that Enlightenment? Wasn’t that doing some good? Sure. I didn’t say every early American was a witch-burning religious nut. A lot of people leaned toward Christianity-lite deism, which endorsed the rough moral ideas of religion (it’s super uncool to kill thy neighbor) but not Christ’s putative fantastical magic acts.

For much of my life I have had the explicitly stated philosophy that the religion and crazy thinking and beliefs of others are fine by me if, for all intents and purposes, they don’t affect me. That was how I conceptualized the limits of my religious tolerance. One of the main exemplars of the reduced-magic deist way of thinking, Thomas Jefferson, was typical of the exact same American spirit of cautious latitude towards religious nuttiness. He had a more delightfully poetic way of putting it, "But it does me no injury for my neighbour to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg." With such an almost aggressively laid back attitude, the American project was officially started. Of course making such an attitude the literal Rule Number One gave great encouragement to people who were inclined to think up all kinds of crazy humbug. Maybe it’s the American side of me, but I am actually sympathetic to that; I feel the alternative of repression is worse.

Ideas find currency or they fade into obscurity. Richard Dawkins likened this process to biological evolution. Just as natural selection determines the fate of genes (a fact Andersen’s book points out is not believed by most Americans), Dawkins believed there was a cultural evolution for ideas, "memes" as he called them in this context. Returning to the early USA, there are several memetic possibilities to consider. First, perhaps the environment for ridiculous nonsensical memes was just better in America because of the temperament of Americans. Or perhaps, when there is a meme explosion under a regime of very free thinking, crazy memes are generated slightly more often. In a bubbling cauldron of absurd ideas, if you stoke the fire and get far more crazy ideas than other systems of governance do, it is not axiomatic that the same cauldron will produce a commensurate amount of good sense to keep the crazy stuff in check. There are lots of reasons that crazy American thinking could be exceptional.

Andersen’s book doesn’t get into that kind of analysis, but what it does do is catalog the entire history of all those wingnut memes which have had important and profound effects on the nation. And wow. It’s jaw dropping. And it’s not just batshit insanity that’s a problem here. Americans are super creative, make no mistake. Plausibly semi-sane American things like P.T. Barnum, Hollywood, Broadway, Vaudeville, the CIA, role playing games, novels, theme parks, modern Halloween, comics, paintball, advertising, sports, drugs, pornography, video games, etc., etc., purposefully do a stupendously good job of intermingling reality and fantasy.

This tendency to make fantasies as real as possible and the real as fantastic as possible is a quintessential American speciality. The imagination of Americans is awesome. The kind of awesome that puts dudes on the moon (or fakes it convincingly enough for me). The real point of the book is that when you have people busy working on making the mundane seem fantastic (e.g. all of advertising) and others working on making fantasies seem real (e.g. Hollywood), at some point they collide. The book makes a good case that we are seeing that now. Fake news. Reality TV. If you had to say whether the things on Facebook were ostensibly true or false, how would you even answer that?

Kurt Andersen isn’t some yokel. His writing is practiced and immaculate, yet quite lively and entertaining. It’s easy to imagine him graduating from Harvard (no longer pursuing its original mission to teach magical thinking) with honors (he did). Indeed, it is superlative prescience that he was so deep into writing this book when the apotheosis of the fantasy-reality chimera came to dominate our cultural bandwidth.

The reason this book is so topical and important is that so many people are looking at the state of the nation and basically asking, WTF? Americans couldn’t do any better than a guy like Donald Trump? Seriously? If you’re gobsmacked by the nature of the guy in office and trying to make sense of it, this book is a huge help. (I bet you are a bit in shock about the state of the nation if you’re literate enough to read a book!) It goes a long way towards explaining exactly just WTF.

And it’s not a neat tidy deal. We can’t just say that the bad people did some silly things and if we push back on that, it will all be good. How much sensational (false? false-ish?) awesomeness do you want in your news, for example? How much social media (useless depressing crap?) do you want? How much reality TV? How much celebrity? Well, the people are speaking on these issues and at this point cutting back to a more realism-based perspective isn’t looking too likely.

It’s not even enough to wish for "the truth" and a return to reality-based thinking. This new breed of Russian mail order presidents makes things very complicated. When Trump makes crazy statements that, in the real world, are patently false and gets called out for them, he simply claims that any criticism is "fake news". This reminds me a lot of logic puzzles where one guard always lies and one always tells the truth and you have to figure out which is which when one says "the other guard does not lie". Those puzzles are puzzling! Requiring the populace to constantly be thinking like that is a non-starter.

So we have a "leader" who is so entertaining (like Darth Vader) that he dominates the attention of the entire nation. Politics became entertainment, fantasy. And this government, now unmoored from reality, is a new realm. Fantasyland.

UPDATE 2018-01-29 Don’t think we’re on the other side of the looking glass? Check out this astonishing Tweet from God’s special man in Rome!

There is no such thing as harmless disinformation; trusting in falsehood can have dire consequences.

Pope Francis
— 2018-01-24 0330

The most radical antidote to the virus of falsehood is purification by the truth.

Pope Francis
— 2018-01-24 0830

GPU Machine Learning And Ferrari Battle

2018-01-17 00:47

It used to be that if you were an exclusive Linux user (guilty!) gaming was pretty much not something you did. There were, relatively speaking, very few games for Linux. However, that list has been growing extremely quickly in recent years thanks to Valve’s SteamOS, which is really a euphemism for "Linux".

With this in mind, some time ago (a couple of years?) I purchased an ok graphics card for my son’s gaming computer. Now I’m pretty thrifty about such things and I basically wanted the cheapest hardware I could get that would work and that would reasonably play normal games normally. As a builder of custom workstations for molecular physicists, I’ve had a lot of experience with Nvidia and hardware accelerated graphics. But it turns out that rendering thousands of spherical atoms in the most complex molecules is pretty trivial compared to modern games. So much so that for the workstations I build, I like to use this silent fanless card (GeForce 8400) which is less than $40 at the moment. Works fine for many applications and lasts forever. Here’s an example of the crazy pentameric symmetry found in an HPV capsid taken from my 3 monitors, reduced from 3600x1920, driven by this humble $40 card.


But for games, it doesn’t even come close to being sufficient.

How do you choose a modern graphics card? I have to confess, I have no idea. I only recently learned that Nvidia cards have a rough scheme to their model numbers, despite their seeming completely random to me.

Eventually I purchased an Nvidia GeForce GTX 760. I thought it worked fine. Recently, my son somehow had managed to acquire a new graphics card. A better graphics card. This was the Nvidia GeForce GTX 1050 Ti. Obviously it’s better because that model number is bigger, right? My son believed it was better but we really knew very little about the bewildering (intentionally?) quagmire of gaming hardware marketing.

Take for example this benchmark.


Sadly they don’t show the GTX 1050, but based on the 1060 and 1070, you’d expect the 1050 Ti to be way better than the 760, right?

But then check out this benchmark which does include both. It’s better but not such a slam dunk. (Ours is the Ti version, whatever that means.)


People often come to me with breathless hype about some marketing angle they’ve been pitched for computer performance. I always caution that the only way to be sure it will have the hoped-for value is to benchmark it on your own application. You can’t blindly trust generic benchmarks, which at best may not resemble your requirements and at worst may be completely gamed. Since I had these cards and was curious what the difference between GPUs really looked like, I did some tests.
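That principle — time your own workload, not a generic one — can be sketched in a few lines. This is a hedged illustration, not my actual test harness; `my_workload` is a hypothetical stand-in for whatever job you actually care about:

```python
import time

def benchmark(workload, repeats=3):
    """Time a zero-argument callable; return the best wall-clock run in seconds.
    Taking the minimum of several runs reduces noise from other processes."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical stand-in; in practice this would be your real training or rendering job.
def my_workload():
    sum(i * i for i in range(100_000))

print(f"best of {3}: {benchmark(my_workload):.4f}s")
```

The minimum (rather than the mean) of several runs is the conventional choice for wall-clock benchmarks, since interference from other processes only ever adds time.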

Before we return to the point of the exercise, playing awesome games awesomely, let’s take a little hardcore nerd detour into another aspect of gaming graphics cards: the zygote of our AI overlords. Yes, all that scary stuff you hear about super-intelligent AI burying you in paperclips is getting real credibility because of the miracles of machine learning that have been, strangely, enabled by the parallel linear algebra awesomeness of gaming graphics hardware.

Last year I did a lot of work with machine learning and one thing that I learned was that GPUs make the whole process go a lot faster. I was curious how valuable each of these cards was in that context. I dug out an old project I had worked on for classifying German traffic signs (which is totally a thing). I first wanted to run my classifier on a CPU to get a sense of how valuable the graphics card (i.e. the GPU) was in general.

Here is the CPU based run using a 4 core (8 with hyperthreading) 2.93GHz Intel® Core™ i7 CPU 870.

Loaded - ./xedtrainset/syncombotrain.p Training Set:   69598 samples
Loaded - ./xedtrainset/synvalid.p Training Set:   4410 samples
Loaded - ./xedtrainset/syntest.p Training Set:   12630 samples

2018-01-16 19:02:54.260119: W tensorflow/core/platform/]
The TensorFlow library wasn't compiled to use SSE4.1 instructions, but
these are available on your machine and could speed up CPU
2018-01-16 19:02:54.260143: W tensorflow/core/platform/]
The TensorFlow library wasn't compiled to use SSE4.2 instructions, but
these are available on your machine and could speed up CPU

EPOCH 1 ... Validation Accuracy= 0.927
EPOCH 2 ... Validation Accuracy= 0.951
EPOCH 3 ... Validation Accuracy= 0.973
EPOCH 4 ... Validation Accuracy= 0.968
EPOCH 5 ... Validation Accuracy= 0.958
EPOCH 6 ... Validation Accuracy= 0.980
Model saved

Test Accuracy= 0.978

real    4m42.903s
user    17m31.120s
sys     2m28.476s

So just under 5 minutes to run. I could see that all the cores were churning away and the GPU wasn’t being used. You can see some (irritating) warnings from TensorFlow (the machine learning library); apparently I have foolishly failed to compile support for some of the CPU tricks that could be used. Maybe some more performance could be squeezed out of this setup but compiling TensorFlow from source code doesn’t quite make the list of things I’ll do simply to amuse myself.

Hmm, 98% seems suspiciously high. Oh well, it doesn’t matter for benchmarking. Last year I was getting around 93%. Still. That’s not bad when the expected random selection would pick only 2.3% correctly.
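That 2.3% figure follows from the class count of the dataset — assuming the standard 43 sign classes of the German traffic sign benchmark, which the log output doesn’t show directly:

```python
# Random-guess baseline for a balanced classifier with 43 classes (assumed GTSRB count).
num_classes = 43
baseline = 1 / num_classes
print(f"random baseline: {baseline:.1%}")            # ~2.3%

# How many times better the trained model does than pure guessing:
test_accuracy = 0.978
print(f"improvement over chance: {test_accuracy / baseline:.0f}x")
```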

Next I installed the version of TensorFlow that uses the GPU.

conda install -n testenv tensorflow-gpu

Now I was running it on the card that Linux reports as: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1).

Loaded - ./xedtrainset/syncombotrain.p Training Set:   69598 samples
Loaded - ./xedtrainset/synvalid.p Training Set:   4410 samples
Loaded - ./xedtrainset/syntest.p Training Set:   12630 samples

2018-01-16 20:12:40.294673: I tensorflow/core/common_runtime/gpu/]
Found device 0 with properties:
name: GeForce GTX 1050 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.392
pciBusID 0000:01:00.0
Total memory: 3.94GiB
Free memory: 3.76GiB
2018-01-16 20:12:40.294699: I tensorflow/core/common_runtime/gpu/] DMA: 0
2018-01-16 20:12:40.294713: I tensorflow/core/common_runtime/gpu/] 0:   Y
2018-01-16 20:12:40.294726: I tensorflow/core/common_runtime/gpu/]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0)

EPOCH 1 ... Validation Accuracy= 0.920
EPOCH 2 ... Validation Accuracy= 0.937
EPOCH 3 ... Validation Accuracy= 0.975
EPOCH 4 ... Validation Accuracy= 0.983
EPOCH 5 ... Validation Accuracy= 0.971
EPOCH 6 ... Validation Accuracy= 0.983
Model saved

2018-01-16 20:13:18.767520: I tensorflow/core/common_runtime/gpu/]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0)
Test Accuracy= 0.984

real    1m7.441s
user    1m5.344s
sys     0m5.452s

You can see that it found and used the GPU. This took less than a quarter of the time that the CPU needed! Clearly GPUs make training neural networks go much faster. How does it compare to the other card?
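For the record, the speedup can be worked out directly from the two `real` times reported by `time` above (this is just arithmetic on the printed figures, nothing more):

```python
def to_seconds(t):
    """Convert a `time`-style 'XmY.YYYs' string to seconds."""
    minutes, rest = t.split("m")
    return int(minutes) * 60 + float(rest.rstrip("s"))

cpu = to_seconds("4m42.903s")   # i7 870, CPU-only TensorFlow run
gpu = to_seconds("1m7.441s")    # GTX 1050 Ti run
print(f"speedup: {cpu / gpu:.1f}x")   # ~4.2x
```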

One caveat is that I didn’t feel like swapping the cards again, so I ran this on a different computer. This time on a six core AMD FX(tm)-6300. But this shouldn’t really matter much, right? The processing is in the card. That card identifies as: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1). Here’s what that looked like.

Loaded - ./xedtrainset/syncombotrain.p Training Set:   69598 samples
Loaded - ./xedtrainset/synvalid.p Training Set:   4410 samples
Loaded - ./xedtrainset/syntest.p Training Set:   12630 samples

2018-01-16 20:13:57.953655: I tensorflow/core/common_runtime/gpu/]
Found device 0 with properties:
name: GeForce GTX 760
major: 3 minor: 0 memoryClockRate (GHz) 1.0715
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.88GiB
2018-01-16 20:13:57.953694: I tensorflow/core/common_runtime/gpu/] DMA: 0
2018-01-16 20:13:57.953703: I tensorflow/core/common_runtime/gpu/] 0:   Y
2018-01-16 20:13:57.953715: I tensorflow/core/common_runtime/gpu/]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0)

EPOCH 1 ... Validation Accuracy= 0.935
EPOCH 2 ... Validation Accuracy= 0.953
EPOCH 3 ... Validation Accuracy= 0.956
EPOCH 4 ... Validation Accuracy= 0.976
EPOCH 5 ... Validation Accuracy= 0.971
EPOCH 6 ... Validation Accuracy= 0.979
Model saved

2018-01-16 20:14:43.861117: I tensorflow/core/common_runtime/gpu/]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0)
Test Accuracy= 0.977

real    1m0.685s
user    1m10.636s
sys     0m7.164s

As you can see, this is pretty close. I certainly wouldn’t want to spend a bunch of extra money on one of these cards over another for machine learning purposes. So that was interesting but what about where it really matters? What about game performance?

This is really tricky to quantify. Some people may have different thresholds of perception about some graphical effects. Frame rate is an important consideration in many cases, but I’m going to assume that 30 frames per second is sufficient since I’m not worrying about VR (which apparently requires 90fps). My goal was to create the setup most likely to highlight any differences in quality. I created two videos, one using each card on the same computer, and then spliced the left side of one to the right side of the other.
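Those frame-rate thresholds translate into per-frame rendering budgets, which is the number the GPU actually has to hit:

```python
# Per-frame rendering budget implied by each frame-rate target mentioned above.
for fps in (30, 90):   # 30 fps: my flat-screen threshold; 90 fps: the reported VR minimum
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
```

So a card locked at 30fps has 33.3ms to draw each frame, while VR allows only 11.1ms — a threefold tighter budget.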

This video is pretty cool. In theory, it is best appreciated at 1920x1080 (full-screen it, maybe). Locally, it looks really good, but who knows what YouTube has done to it. Even the compositing in Blender could have mutated something. Even the original encoding process on my standalone HDMI pass-through capture box could have distorted things. (This standalone capture box does produce some annoying intermittent artifacts, like the left of the screen at 0:15 and the right at 0:21 — this is the capture box and has nothing to do with the cards.) And of course if you’re using Linux and Firefox you probably can’t see this in high quality anyway (ahem, thanks YouTube).

So that’s video cards for you. What may look like hardware models with an obvious difference may not really have much of a difference. Or they might. In practice, you need to check them to really be sure. If you noticed any clear difference between the two video sources, let me know, because I didn’t see one. Frame rates for both were locked solidly at 30fps.

Speaking of incredibly small differences, how about those two laps around the Monaco Grand Prix circuit? I drove those separately (in heavy rain with manual shifting) and the driving is so consistent that they almost splice together. I’ve enjoyed playing F1 2015. This is the first time Linux people could play this franchise. The physics are as amazing as the graphics. What is completely lame, however, are the AI opponents (too annoying to include in my video). Wow they are stupid! Computer controlled cars… a very hard problem.


For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2018