Chris X Edwards


Taxman In Deep Doodoo

2015-05-26 15:50

I tried to warn all my friends. I hope they listened.

Here’s Krebs covering the massive theft of data from the IRS through the mechanism I discussed previously. I feel vindicated in warning people since the IRS is now reporting 200,000 suspicious transcripts were requested and 100,000 were authenticated! That’s a 50% success rate by guessing using automated techniques. And that’s what the IRS is admitting to at this time.

I’ve always thought it was doubly rotten to borrow money from poor people (a tax refund is a loan being repaid) and then make those people, who are generally not accountants, fuss with the details of the transaction.

But hey, that’s what happens when you pay providers of an unnecessary service enough to lobby Congress to make the service compulsory.

Coming Up Short

2015-05-21 20:35

At first I was in denial. There was a problem, but I figured it was me. But now I’m getting indications that it is not me. Since May 10, 2011, I’ve been using this script I wrote to make shortened URLs from the command line and, importantly, from within Vim.

 [ -z "$1" ] && echo "usage: goodotgl <url>" && exit
 PD=$(printf '{"longUrl": "%s"}' "$1")
 /usr/bin/wget -q -O- --post-data="$PD" \
  --header "Content-Type: application/json" \
  https://www.googleapis.com/urlshortener/v1/url |
  sed -n 's/ "id": "\(.*\)",/\1/p'

I wasn’t the only person to do this. Here’s a modest collection of other similar techniques. I seem to have been happily using it on March 5, 2015 but by March 16 it seems to have stopped working. At first this wasn’t shocking since URLs that need to be cleaned up are also the kinds of monsters that could easily mess up a simple shell script’s quoting and escaping. But after a few failures, I looked closer and realized that the whole system is broken (to me).

Here’s one of the curl techniques failing.

$ curl -s http://goo.gl/api/shorten -d'&url=' |\
 sed -e 's/{"short_url":"//' -e 's/","added_to_history":false}/\n/'
{"error_message":"Invalid Captcha"}

Captcha? Going to goo.gl without being logged into Google’s ecosystem produces a (re)captcha in the form of a check box that affirms, and I quote, "I’m not a robot." If you wait too long, however, you’ll get a "Session expired. Please verify again." Since the captcha is so, well, easy, I imagine that Google was previously relying on their own analysis of who was a spambot and probably doing a pretty good job. My IP address asking for 5 or so anonymous URLs a month clearly was a true negative.

According to this, "Method requires authorized requests." It looks like the "free" lunch is over. I’m disappointed, but not really surprised. I will no longer be using the services of goo.gl. But at least they have some feedback from me explaining why. Ahem.

AGI Researcher; or, The Modern Prometheus

2015-05-21 16:43

Today, in 2015, I believe that it is not possible to purchase a toaster that can reliably produce a consistent piece of toast. Yet the recent chatter of the technophile interwebs makes it seem like our most pressing issue is preventing Skynet.

People I generally respect like Nick Bostrom, Sam Harris, and Stephen Hawking are sounding the alarm. The publicity magnet Elon Musk is also concerned. Crazy futurists and respectable science fiction authors have covered this ground before but for some reason, there seems to be a new wave of genuine concern.

I’m not buying it.

I have already noted Karl Popper’s important observation from 1953.

In constructing an induction machine we, the architects of the machine, must decide a priori what constitutes its "world"; what things are to be taken as similar or equal; and what kind of "laws" we wish the machine to be able to "discover" in its "world". In other words we must build into the machine a framework determining what is relevant or interesting in its world: the machine will have its "inborn" selection principles. The problems of similarity will have been solved for it by its makers who thus have interpreted the "world" for the machine.

Conjectures and Refutations
— Karl Popper

This may not be a factor once AI starts mutating wildly away from human design. But if AI needs to evolve just like human minds did to acquire sinister desires, I figure we’ve got a decent head start.

The ever pragmatic and sensible Economist points out some rather prosaic analogues while simultaneously hedging against an extraordinary turn of events.

But even if the prospect of what Mr Hawking calls "full" AI is still distant, it is prudent for societies to plan for how to cope. That is easier than it seems, not least because humans have been creating autonomous entities with superhuman capacities and unaligned interests for some time. Government bureaucracies, markets and armies: all can do things which unaided, unorganised humans cannot. All need autonomy to function, all can take on life of their own and all can do great harm if not set up in a just manner and governed by laws and regulations.

— Economist

The best analysis I’ve seen on the issue came to me by an interesting coincidence. I am a regular reader of Tyler Cowen’s blog and he had Ramez Naam as a guest blogger covering this exact topic. I had just finished reading Naam’s excellent novel Nexus (on a tip from none other than John Carmack) and was quite interested in what else he had to say. Naam has an even better article about the threats (or not) of strong AI on his own website. In that article he hits upon the business I’m currently in, computational chemistry.

Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it’s still hard. Why? Because the problem is incredibly non-linear.

I can personally affirm this by pointing out with no disrespect that the smartest people in the field know essentially nothing about how any of it works. What I mean by this is that in 100 years of X-ray crystallography’s existence, the major thing biochemists have truly learned is how much more there is yet to learn. Any kind of biochemistry is, like Naam says, extremely non-linear. We barely know what kinds of chemical bonds are possible or what magic actually makes them work. Moving from that precarious foundation to things like protein folding and molecular signalling, the complexity goes up. And not linearly.
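The non-linearity can be made concrete with a Levinthal-style back-of-the-envelope calculation (the numbers here are illustrative assumptions, not measurements): even a crude model of molecular conformations explodes exponentially with chain length.

```python
# Levinthal-style back-of-the-envelope (illustrative numbers): if each
# residue of a protein can sit in just k discrete states, a chain of n
# residues has k**n conformations to search through.
def conformations(n_residues, states_per_residue=3):
    return states_per_residue ** n_residues

# Even a modest 100-residue protein:
print(conformations(100))  # 3**100, on the order of 5e47
```

Trillions of times more computing power barely dents a search space like that, which is why brute-force simulation of biochemistry remains out of reach.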

This means that if you think we are going to comprehensively simulate organic human brains somehow with computers, I’ve got some bad news for you. If you think that we’ll be able to simulate some kinds of simplified neural networks, sure, that’s possible. But it is becoming apparent that simple neural networks, used as a standard engineering tool now, do not a superhuman intelligence make.
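To be clear about what those "simple neural networks, used as a standard engineering tool" actually are: at bottom, nested weighted sums with a squashing nonlinearity in between. A minimal hand-rolled sketch (the weights are random placeholders, not a trained model):

```python
import math
import random

# A tiny two-layer feedforward network, written out by hand to show
# there is no magic: weighted sums, a nonlinearity, more weighted sums.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # input -> hidden
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # hidden -> output

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([0.1, 0.2, 0.3]))  # a single scalar "decision"
```

Useful for pattern recognition, certainly, but a long way from an organic brain.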

My own rationale for why a putative advanced AI will not be causing us too much trouble is the same as for why we haven’t killed all mosquitoes. They may be bothersome to us and we may kill any we can easily catch which are making nuisances of themselves, but despite (most of us) being smarter than the average mosquito, it would simply be a colossal waste of our resources to concern ourselves with every airborne pest of the Yukon Territory. Correspondingly, it is a huge conceit to believe that an especially intelligent being would have any interest in humans one way or another.

This whole discussion really is a type of Frankenstein (or, The Modern Prometheus) problem. Humans apparently are drawn to scary stories and the idea of a human creation somehow overpowering its creator is a classic theme in literature.

Sorcerer’s Apprentice

Stories of Midas, the tree of knowledge, genies, golems, and Faustian bargains have cautioned that getting supernatural help may not be as great an idea as it first seemed. For the computer age the tradition continues with R.U.R., the Czech play that introduced the word "robot" while depicting robots in insurrection against their human creators. That theme was typical of the popular monster movies of the early 20th century. For example, King Kong is about a human-like force which was thought to be under control, but whose unplanned liberation caused more trouble than expected. Likewise Godzilla was inadvertently summoned by uncertain technology (nuclear weapons). This cultural environment seems to have greatly influenced writers such as Asimov (Laws of Robotics), Arthur C. Clarke (HAL 9000), and Philip K. Dick (Blade Runner). These stories in turn have inspired newer works like Spielberg’s A.I., Her, and Transcendence.

I suspect that humans will never tire of this genre. There will always be a compelling new way to present the theme. One lesson to draw from this cultural perspective is that like Dr. Victor Frankenstein’s "fallen angel", the monster is really us.

Virtual Robots

2015-05-13 12:53

The existence of this article in MIT Technology Review caught my attention because I am very interested in this topic. The title of the article is "Even Robots Have Their Own Virtual World". On my C.A.R. project page I list the following as one of the project’s goals.

To create a real world testing ground that can then be simulated. The purpose of this would be to test whether simulated versions of the environment behave similarly to real ones. The point is that if you have an AI driver and it performs well in computer simulations, is there a context where that success can be proven to translate correctly to a real world setting? This has big implications for autonomous car development.

The article itself is not so interesting because it doesn’t tell me anything I didn’t already know. I already know about ROS (Robot Operating System) and its affiliated project Gazebo whose web site provides this sensible justification for the idea.

Robot simulation is an essential tool in every roboticist’s toolbox. A well-designed simulator makes it possible to rapidly test algorithms, design robots, and perform regression testing using realistic scenarios. Gazebo offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments.

I was disappointed that the article did not mention anything about autonomous cars. I’m not sure that Gazebo is the ideal platform for testing autonomous vehicles, though it might work. But if there was ever a robotic technology that could benefit from heavy off-line testing in a simulated environment, it’s robotic cars.

Even if the simulated environment is very low fidelity, it would still be quite useful for catching major errors. For example, I had a painfully awful bug in my AI agent that showed up in about 20% of tracks. Here and also here are examples of what that looks like. Can you imagine that problem needing to be discovered and debugged on actual streets with expensive hardware? This bug was elusive enough even being able to slow time down and know the complete state of the universe.
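The kind of low-fidelity harness I mean can be sketched in a few lines (everything here, the track model and the steering rule alike, is a hypothetical stand-in, not my actual C.A.R. code): run the agent over many seeded tracks so that any failure is exactly reproducible from its seed.

```python
import random

# Hypothetical low-fidelity regression harness: a 1-D "track" with
# random curvature and a naive corrective steering rule. Each run is
# fully determined by its seed, so any failing track can be replayed,
# slowed down, and inspected with the complete state of the universe.
def run_track(seed, steps=200, width=10.0):
    rng = random.Random(seed)
    position, heading = 0.0, 0.0
    for _ in range(steps):
        curve = rng.uniform(-0.1, 0.1)       # track curvature this step
        heading += curve - 0.5 * position    # steer back toward center
        position += heading
        if abs(position) > width:            # left the track: a crash
            return False
    return True

failing_seeds = [s for s in range(100) if not run_track(s)]
print("failed tracks:", failing_seeds)       # replay these to debug
```

A bug that shows up in 20% of tracks announces itself immediately in a loop like this, at zero hardware cost.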

At some point we’ll start to realize that these models we’ve been building into game engines can be quite useful. Just exactly how useful is the very good question I’m trying to work on. Hopefully when the importance of this kind of technology is better understood, I’ll have better opportunities to work with it.

I Love Wix

2015-05-06 15:03

I love Wix! I love it so much that I enthusiastically recommend it to the frighteningly large number of people who ask me for advice for which such a recommendation would be appropriate.

What is Wix? Wix is a cheese factory. Kind of like the modern MySpace. And why would I love that? I will explain.

When I started programming as a kid I was excited by the feeling of power, of true magic. What could be done with a computer was intense and a radical departure from the capabilities of the past. This was a potent tool that was going to open new worlds. Today, however, when I look around at what most ostensible "computer professionals" are doing I feel like 99% of them are building cheese. They are buried in the minutia required to sustain some marketing dork’s vision of "making it pop".

When people come to me because they have some conception of a web site, they invariably want to tell me what it looks like. I am not interested. I have studied art and art history in museums around the world and amassed my own large personal collection of fine art scraped from the internet. What makes it to a typical web site is not art, it is cheese.

Take the striking example of Banksy’s website. It is awful cheese. The contrast is clear because the content (anyone remember that?) is so brilliant. As a detailed example, just look at the slideshow arrows: all wrong and poorly placed. The whole slideshow is simply annoying - am I clicking through it or is it going by itself? Or both? Banksy’s site is "mobile friendly" but do you think Banksy’s artistic sensibilities went into the hamburger icon that appears when you restrict the display width? Do you appreciate seeing the content with less information than you’re actually downloading? Look how much more detail there’d be without the clumsy web site "features". This website’s ASP lameness is easy to overlook since it’s overshadowed by the brilliant artwork which is its content. It’s also easy to overlook the tasteless composition and confusing navigational layout because it is completely unremarkable; almost all sites on the web are no different. I definitely do not want Banksy wasting his time worrying about web site minutia.

Perhaps the surest sign of cheese is the presence of templates. (I’m undecided about whether STL counts.) Letting normal people get on with their lives is fine and choosing your style from a menu of choices is probably often a good way to go. But do not forget - if you chose "your style" from a collection of templates it is not your style. It will have the verisimilitude of creativity but do not mistake it for actual creativity.

This brings us back to Wix whose main page paradoxically says, "100s of fully customizable HTML5 templates available in every category. Choose yours and create something totally original." I’m serious. It says that. I guess they think original is sharing characteristics with the claimed "64,567,128" users divided by "100s" of templates. In the past when people with no clue about web sites (and often precious little clue about any compelling content) would have an idea for a mediocre clumsy web site that they could not implement, they would often come to me. Now I can just turn them loose on Wix. Problem solved! In other words, I’ll never have to work on creating a web site that is lamer than a Wix template. That’s a surprisingly big step forward.

The larger point should not be missed here. This does free up talented people to worry more about higher level things than struggling with the inordinate annoyances involved in a project like making a trendy web site. Maybe people can focus on higher level things like ideas that are actually interesting. This is why Wix is great.

One may look at my website as a horror of what not to do, but my website is rather illustrative. While I have taken some rudimentary steps to modernize it for practical purposes, keep in mind that my website looked very stylish in 1998. People easily forget that web site fashions change more quickly than sartorial ones. Do you want to judge a person on their clothes or their value as a person? I like to think that my website decisions (for example, using the efficient and practical Apache mod_autoindex) are a kind of subversive commentary on the state of the art. Banksy would be proud if he cared one bit. I tend to like this kind of web site though I also can appreciate stuff like this. But hey, don’t take any web site advice from me. I’m weird! Go to Wix and I’ll get back to interesting work.

And if your work in general is boring and tedious and uninteresting, get ready for it to be taken over by the first company that brings templates of your product right to the end user. Now we just need an automated service with "hundreds of templates in every category" that turns an insipid idea for an app into an insipid app in minutes!


For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2015