Chris X Edwards

C++ prvalue is an rvalue that is not an xvalue. Which I believe has negative P.R. value for C++.
2019-03-20 11:15
Arthur Schopenhauer, philosophy's Wolverine.
2019-03-17 12:21
"Refuse Control", my municipality's euphemism for garbage collection, sounds like the title of a punk song. Advice taken!
2019-03-11 07:07
Tested Bridgewater recruiters (who sought me out) with the harsh trenchant critical honesty they claim to value. Fail.
2019-03-07 10:32
Hey @Google, I don't do a lot of shopping in Latin America or Southeast Asia so I may seem less than human identifying storefronts there.
2019-03-05 18:46
Blah Blah

Greenland Cleans Up Its Goo

2019-03-23 16:00

This morning someone sent me a shortened URL. You might recall one of my more useful posts, How To Embiggen A Shortened URL, and guess that I don’t like to gormlessly stumble into such things without first checking for nasty surprises.

The surprise I found was not the typical spammer or malware problem you’re likely to read about on Krebs On Security. The surprise was that Google is pulling the plug on the shortening service.

From a note at the top of goo.gl:

You will be able to view your analytics data and download your short link information in csv format for up to one year, until March 30, 2019, when we will discontinue…

This is a good opportunity to remind yourself that URL shorteners are probably a bad idea. Remember my recent posts on DNS, the whole point of which is to unshorten your host’s address from a compact 32 bits to something bigger and human readable… and now we want to immediately shrink them to something smaller and not human readable? Sounds like a flawed design, doesn’t it? In the zenith of URL shortener popularity, a common coding interview question was to outline a design for such a system — my answer was always prefaced with "Don’t go there, it’s a bad idea." It seems that Google is finally starting to see things the same way.

This post isn’t about URL shorteners, however. This post is about the unpleasant surprises which may be in store for you if you imagine that the cloud services you depend on are eternal.

I must give a special thanks to Google Reader, a service I used heavily for years, for curing me of the illusion that my favorite cloud services would never pull the rug out from under me. I’m currently getting emails from Google Plus telling me they’re pulling the plug on that social media site too. Hey, don’t worry, I learned my lesson at Orkut!

For some reason, this list doesn’t specifically mention one of the most disturbing casualties of Google’s sewer pipe, Google Code. People trusted this service to keep important software safe. Just like they trusted SourceForge prior to that.

This is obviously not just a Google problem. How confident do you think I am that today’s popular universally trusted "eternal" source code repository won’t be sold to a predatory company with disturbing conflicts of interest? Oh hang on, that’s already happened. But you get the idea.

Did anyone notice the recent MySpace debacle? Wherein all music uploaded to the service between 2003 and 2015 has been… lost. Forever. Whoopsie! Of course many are delighting in this since MySpace degenerated into a bit of a cringey joke, but I’m sure there are some amateur musicians who aren’t too happy to have entrusted their work to this service.

The Yahoo! fiasco! is another example of an internet titan brought low. And history is full of giant tech companies that seem permanent and eternal. Until they are gone. DEC, Data General, Wang Labs, Geocities, Sun, Silicon Graphics, AltaVista, Netscape, Palm, Compaq, Kodak, Radio Shack, National Semiconductor…

For most people, using cloud services is more reliable than doing it themselves, but a better strategy for everybody, if possible, is to do both.

DNS Belabored - Plus Fast Unix Dot Plotting

2019-02-13 22:03

Last weekend I ventured into some light DNS research. I wrote some scripts that did some look ups and I showed a bunch of confusing numbers and I really didn’t have anything super interesting to say about it. Don’t get your hopes up or anything because I don’t think the topic has any really interesting surprises, but I did want to finish learning what I could learn.

For the last few days I’ve been running my look up script every once in a while, this time doing 1000 name lookup operations for each DNS server on my list (yes, there may be others, but I don’t know much about them). Now I have a collection of 5000 random look ups for each server trying to resolve a random three letter domain name, 7/8 of which are ".com" and 1/8 ".org". Surely that is enough data to say something useful.

In my last post I pretty much left it with this.

I’m also a bit surprised to find the standard deviation of the look up times to be huge. What’s going on there? Hard to say. I may do some more analysis on that…

This is still true and I feel like this is the most interesting thing. My desire to understand this data led me to research how best to visualize it. Back in the surreal part of my life where I dealt with biotech people, I got to see a lot of plots of the average with an "error bar". Here’s a typical one from a paper which cites the lab I used to work for.


Now I have to be honest, I never understood just how those error bars truly related to reality and I always secretly suspected that they did not! This particular example plainly states that it is simply a "standard deviation of the mean" but it’s not always so clear. Long time readers with extraordinary memories may recall that I have taken a very keen interest in the philosophical foundations of modern statistical thinking. (Spoiler alert! — they range from circular to turtles to non-existent!) Here’s a typical review I wrote in 2012. And another and another, etc. So you can imagine how delighted I was to find that in today’s XKCD, the brilliant Randall Munroe illustrates my misgivings with crystal clarity. Check it out.

XKCD nails it again

That is exactly the problem with this kind of plot.

Ok, so how should one present such data? I came across this interesting perspective called Beware Of Dynamite Plots from a biotech guy, Koyama. He advocates just plotting all the data. I think his points are well made and biotech people would be well-advised to carefully think about his message.

However, for computer nerds things might be a bit different. Koyama is rightly concerned about a paucity of data which is typical with life science wet lab experiments. I, on the other hand, easily collected a data point for each of 75000 experiments before I simply got bored with the process. I do like the idea of showing that data though. I could pop that in Excel, but only new Excel because before 2007 it wouldn’t fit (so I’m told — I’m not exactly a user). You can imagine that experiments like I’m doing could have billions of data points. If they involve high speed network traffic or disk writes or something this could require some very high-performance methodology.

Wanting to show all the data I could, I decided to write a program that takes a Unix data stream (or data in a file or collection of files) and simply plots a pair of numbers to an image. That’s it. Since I wanted it to be very fast and efficient, I wrote it in C; the whole program needs only 14kB of disk space.

What kind of interesting output can I get with such a sharp tool?

I first organized the data I collected to be in this form.

1 249
3 69
10 122
6 35
10 1512
13 40
9 626
6 132
12 262
4 252

The first field is the number of the DNS service I want to test. For example, 4 is Google’s and 6 is Cloudflare’s. The second column is the time in milliseconds it took to look up a randomly synthesized domain.

The great thing about Unix is that you can pipe things to and fro and this allowed me to use awk to get the data scaled properly for the plot. Other than that, I just piped it to my program like this.

awk '{if ($2<4000) print $1*31 " " $2/4}' dnsdata | ./xy2png

Here I’m throwing away all data greater than 4000ms (4 seconds!) because that’s into some weird stuff that probably should be ignored. I scale the first field up by 31x to distribute the servers across the plot’s X axis and scale the times down by .25x (1/4th) to make sure 4000ms fits on a 1000 pixel plot.
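To see the scaling concretely, here is the first sample row pushed through that same awk filter (no xy2png needed):

```shell
# Scale a data row for plotting: X spread out by 31, Y (milliseconds) shrunk to 1/4.
echo "1 249" | awk '{if ($2<4000) print $1*31 " " $2/4}'
# Prints: 31 62.25   (server 1 lands at x=31; a 249ms lookup lands at y=62.25)
```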

And here’s what we get.


I left that image raw so you can see the kinds of images my xy2png program produces. Zooming in to look at only results that came back within a half second (500ms), and then dressing it up in post production, I get something like this.


Note that the left-to-right order was the same in the previous plot but I inverted the Y to make it more customary — increasing values usually go up.
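The zoomed version is just the same pipeline with a tighter cutoff and a bigger scale factor, along these lines (the exact numbers here are a sketch, not the precise command I used):

```shell
# Keep only lookups under 500ms and stretch them 2x to fill a 1000 pixel axis.
# The cutoff (500) and scale factor (2) are illustrative reconstructions.
awk '{if ($2<500) print $1*31 " " $2*2}' dnsdata | ./xy2png
```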

I feel like this plot is finally starting to give some sense as to what the subtle differences are in these services. For example, where average and standard deviation would have been confusing, we can easily see that Quad9 has some serious problems. Google has some odd glitches: either they return something very quickly (cached?) or they fall into a relatively slow lookup regime. OpenDNS has some snappy lookups but the bulk of their results are definitely slower than the others. If for some weird reason you like Norton the plot makes it clear that going with their secondary name server is the better bet.

The most interesting and practical result of this detailed analysis however is how Cloudflare’s excellent performance is clearly revealed. 1.1.1.1 and 1.0.0.1 are clearly the best name servers, clearly beating both omniscient Google and Verizon’s privileged network (as my ISP).

Set your DNS servers confidently to 1.1.1.1 and 1.0.0.1 and if you have an analysis that needs a bajillion points plotted in a serious hurry keep my Unix/C technique in mind.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "png.h"
#define MAXLINELEN 666

void process_line(char *line, png_image i, png_bytep b) {
    const char *okchars= "0123456789.-+|, "; /* Optional: "eE" for sci notation. */
    for (int c= 0;c < strlen(line);c++) { if ( !strchr(okchars, line[c]) ) line[c]= ' '; }
    const char *delim= " |,"; char *x,*y;
    x= strtok(line, delim);
    if (x == NULL) { printf("Error: Could not parse X value.\n"); exit(1); }
    y= strtok(NULL, delim); /* When NULL, uses position in last searched string. */
    if (y == NULL) { printf("Error: Could not parse Y value.\n"); exit(1); }
    int px= atoi(x), py= atoi(y);
    if (px < 0 || px >= i.width || py < 0 || py >= i.height) return; /* Skip out of range points. */
    b[py * i.width + px]= 255;                /* Compute and set 1d position in the buffer. */
    /* if (verbose) printf("%s,%s\n",x,y); */
} /* End process_line. */
void process_file(FILE *fp, png_image i, png_bytep b){
    char *str= malloc(MAXLINELEN), *rbuf= str;
    int len= 0, bl= 0;
    if (str == NULL) {perror("Out of Memory!\n");exit(1);}
    while (fgets(rbuf,MAXLINELEN,fp)) {
        bl= strlen(rbuf); // Read buffer length.
        if (rbuf[bl-1] == '\n') { // End of buffer really is the EOL.
            process_line(str,i,b); // A complete line has been collected; plot it.
            free(str); // Clear and...
            str= malloc(MAXLINELEN); // ...reset this buffer.
            rbuf= str; // Reset read buffer to beginning.
            len= 0;
        } // End if EOL found.
        // Read buffer filled before line was completely input.
        else { // Add more mem and read some more of this line.
            len+= bl;
            str= realloc(str, len+MAXLINELEN); // Tack on some more memory.
            if (str == NULL) {perror("Out of Memory!\n");exit(1);}
            rbuf= str+len; // Slide the read buffer down to append position.
        } // End else add mem to this line.
    } // End while still bytes to be read from the file.
    free(str); // Release the final line buffer.
} // End process_file.

int main(const int argc, char *argv[]) {
    png_image im; memset(&im, 0, sizeof im); /* Set up image structure and zero it out. */
    im.version= PNG_IMAGE_VERSION;           /* Encode what version of libpng made this. */
    im.height= 1000; im.width= 1000;         /* Image pixel dimensions. */
    png_bytep ibuf= malloc(PNG_IMAGE_SIZE(im)); /* Reserve image's memory buffer. */
    memset(ibuf, 0, PNG_IMAGE_SIZE(im));     /* Start with a black image. */
    FILE *fp;                                /* Input file handle. */
    int optind= 1;                           /* Index of first file argument. */
    if (argc == 1) {                         /* Use standard input if no files are specified. */
        process_file(stdin,im,ibuf);
    }
    else {
        while (optind<argc) { // Go through each file specified as an argument.
            if (*argv[optind] == '-') fp= stdin; // Dash as filename means use stdin here.
            else fp= fopen(argv[optind],"r");
            if (fp) process_file(fp,im,ibuf); // File pointer, fp, now correctly ascertained.
            else fprintf(stderr,"Could not open file:%s\n",argv[optind]);
            if (fp && fp != stdin) fclose(fp); // Done with this file.
            optind++;
        } // End while arguments remain.
    }
    png_image_write_to_file (&im, "output.png", 0, ibuf, 0, NULL); /* PNG output. */
    return 0;
}

DNS - The Mediocre, The Bad, And The Ugly

2019-02-10 23:02

Who knows what the 12th Chief Directorate is? I didn’t know. Yet it turns out that the work this organization does is critical in preventing the gruesome deaths of hundreds of millions of people. True! These guys, like their equally obscure but important counterpart at the US NNSA, are in charge of keeping Russian nuclear stockpiles from getting into the wrong hands and causing some very serious problems. The point here is that just because you may have been unaware of something doesn’t mean that it is not extremely important to you.

That brings us to today’s topic, DNS, the Domain Name System. Like nuclear weapon stockpile security, Border Gateway Protocol security, and the security of firmware in your ISP’s networking hardware, DNS is one of those things that few non-professionals think much about yet which is in fact extremely important to everybody. Yes you! Unlike those other serious issues I mentioned, it may actually be possible for the average computer user (i.e. the average human now) to do something positive with respect to DNS.

My website is served from this host on the internet: 01000010.00100111.01100001.11010101. I went ahead and wrote it the way computers like to think of things just so I can emphasize what DNS does. Computers really have no interest in or ability to deal with a name like "xed.ch" in much the same way that you have no interest or ability to memorize and use that binary number. DNS is the bridge. It is the world-wide distributed look up system that translates things we can remember into things computers can actually compute. Here is a popular internet nerd trying to explain this stuff in simple terms. DNS is also a battleground on which tech giants are prosecuting their surveillance capitalism war — you, my dear end users, are the bullets.
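If you want to check my binary transcription, the shell itself can convert it back to the usual dotted decimal form:

```shell
# Convert a dotted binary IP address back to dotted decimal.
# bash's $((2#...)) arithmetic interprets a string as a base 2 number.
ip="01000010.00100111.01100001.11010101"
echo "$ip" | tr '.' '\n' | while read octet; do echo "$((2#$octet))"; done | paste -sd. -
# Prints: 66.39.97.213
```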

There are a lot of serious problems with DNS. First off, it is absurdly complex. It is so complicated that professional administrators almost never use DNS in its intended hierarchical form. For example, the chaps at The Economist organize their DNS names properly, while tech giants Google and Amazon fail dismally, conditioning us to accept incorrectly organized names as legitimate. The fact that Google runs one of the most important DNS services in the world (8.8.8.8) and yet muddles their own DNS organization is probably nothing to be seriously alarmed about (though I am mildly concerned). It may be more disturbing that Google is rapaciously trying to sell your privacy for a small fraction of a "cost per click".

Amazingly, even though Google is a bit ugly, they are not so bad relatively speaking. They are providing a free-ish service and as far as I know they don’t maliciously tamper with the DNS data itself. Other providers do. That is what really bugs me. A lot. Here’s how that works — if I check the following lookup using Google’s DNS service I get the following.

$ dig @8.8.8.8 <long-impossible-domain>.com | grep status
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 818

I used a long and rather impossible domain name that I believe should not exist. Indeed, it is not registered as shown by the NXDOMAIN (non-existent domain) status. However if I run that command on a variety of DNS providers, here’s what I get.

This is showing us that Norton and Verizon (my ISP!) are fielding a request for that nonsense domain and returning an answer! They are telling my client software (e.g. a browser), "Sure, that name totally exists and here is the IP address where you can find it!"

What will you find when your browser trustingly goes to that IP address? It won’t be good, whatever it is. Here is a report on some DNS malware that I was directed to under such circumstances by my former perfidious internet "service" provider, Time Warner (slinking away from its reputation now renamed as Spectrum).

So the first rule of DNS for me is: if the domain does not exist, return an "NXDOMAIN" response. To actually fulfill the request with a bogus "NOERROR" is super uncool.

Today’s entire DNS rant was kicked off by my discovery today at work that our office’s primary DNS server (specified by Verizon over DHCP) is as dead as a Norwegian Blue parrot. Rule two — the DNS servers must actually work. Note that functioning is less important than not tampering with the data!

The next thing to look for is to find a provider which at least acts concerned about your browsing privacy. Whatever the reality, Google has failed to direct such propaganda effectively at me. OpenDNS, by being utterly not "open" in any way, is also failing. But I have to say that Cloudflare’s newish 1.1.1.1 is whispering the right sweet nothings, that it cares about me and my sensitive feelings, etc. Great.

Finally, let’s make sure that the DNS provider we select is not a total performance catastrophe. We could go the easy way and just look at one of the sites that benchmarks public DNS servers. But a stronger approach is to check yourself from your own location on the internet. Not only that, but ideally you would check the kinds of look ups you are likely to do.

What kind of lookups am I likely to do? Here is a nice Unix command that reaches into the browser history’s gruesome SQL database and extracts a list of the most frequent top level domain names it has visited. (Normal people should feel free to ignore all the code in this post.)

echo "SELECT datetime(moz_historyvisits.visit_date/1000000,'unixepoch') dd, moz_places.url \
FROM moz_places, moz_historyvisits \
WHERE moz_places.id = moz_historyvisits.place_id \
AND dd > datetime('2005-01-01 12:00:00') \
AND dd < datetime('2019-03-01 12:00:00') \
ORDER BY dd;" \
| sqlite3 ~/.mozilla/firefox/*.default/places.sqlite \
| sed 's:\(^.*//[^/]*\.\([a-z][a-z]*\)/.*$\):\1 \2:' \
| cut -d' ' -f3 | sort | uniq -c | sort -n
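The sed stage just tags each history line with its TLD and the cut stage keeps only that tag. For example, with a made-up history row (hypothetical URL, but the same "date time|url" layout sqlite3 emits):

```shell
# Tag a "date time|url" history line with its TLD, then keep only the TLD field.
echo '2019-03-01 12:00:00|https://www.example.ch/blog/' \
| sed 's:\(^.*//[^/]*\.\([a-z][a-z]*\)/.*$\):\1 \2:' \
| cut -d' ' -f3
# Prints: ch
```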

Running that I get the following results with 10 or more hits.

     10 jp
     10 pro
     11 zone
     14 si
     15 fi
     15 se
     17 gg
     19 in
     19 nl
     19 tech
     20 fm
     20 fr
     20 name
     20 to
     21 cc
     21 it
     29 es
     29 ly
     32 nz
     34 eu
     38 mil
     41 biz
     45 mx
     46 sm
     59 int
     62 ai
     67 gl
     86 tv
     88 au
     89 me
     95 info
    112 us
    115 de
    115 ee
    132 co
    144 be
    289 ca
    700 io
    951 uk
   1786 net
   1830 gov
   3257 edu
   9728 ch
  13374 org
  81643 com

There are some interesting things to note here. First is that the third most common top level domain (TLD) that I look up is the ".ch" of Switzerland. That is because my domain is registered in Switzerland. Your mileage may vary obviously, which is why you might want such a custom analysis.

The other thing to note is the blank in the second most popular position. An error? No. What this means is that only ".com" has more web visits than going straight to an IP number directly. The reason for this is that I have my page of links, which I use dozens of times a day, bookmarked as my "homepage" by its raw IP number, which avoids a lookup completely. I strongly advise you to take your homepage or most used web page and bookmark it with the IP number already looked up. As you can see, a pretty big percent of what would be my normal DNS requirement simply does not exist.

Clearly my traffic is dominated by .com lookups with some .org thrown in. The following program goes through the DNS providers I want to test and checks the look up time with the dig command. The domains are synthesized using random three-letter names. We can be pretty confident that all 26^3 = 17,576 combinations of three-letter names are taken. I’m showing it here as 7 parts .com to one part .org but you can set your own mix to match your own habits and what is important to you.

declare -A NS
NS[Google]=8.8.8.8        # Two example entries -- the full list of servers to
NS[Cloudflare]=1.1.1.1    # test (Quad9, OpenDNS, Norton, Verizon, etc.) goes here.

function randomain {
    TLD=( com com com com com com com org )
    NofD=${#TLD[@]}
    echo "$(tr -dc 'abcdefghijklmnopqrstuvwxyz' </dev/urandom | head -c3).${TLD[$(($RANDOM%${NofD}))]}"
}

for N in "${!NS[@]}"; do
    for C in {1..100}; do
        D=$(randomain); sleep ".$(($RANDOM%10))"
        dig @${NS[${N}]} ${D} \
        | grep "time:" | cut -d' ' -f4 | sed "s/^/${N} ${D} /"
    done
done

This program produces something like this for output. I check each DNS server 100 times. The first field is the name service, the second is the random domain tested, and the final field is the "Query time" in msec. This is the time it takes to provide the necessary service, so lower is better.

Google 59
Google 47
Google 340
Google 245
Google 44
Google 244
Verizon_2_home 204
Verizon_2_home 28
Verizon_2_home 92
Verizon_2_home 56
Verizon_2_home 354

Note that I throw in a random sleep command to look a little less like a bot harassing the servers with a flood of weird lookups. This process took about 17 minutes to run 100 lookups on all the servers. At first I ran it on a more diverse mix of TLDs and I got the following results at my office and home respectively. They are summarized with average and population standard deviation.
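The summarizing itself is a one-liner. This awk sketch (assuming the name/time pairs are in a file, here hypothetically called dnsresults) computes the average and population standard deviation per server:

```shell
# Per-server average and population standard deviation of query times.
# Assumes lines like "Google 59" with the time in the last field.
awk '{n[$1]++; s[$1]+=$NF; ss[$1]+=$NF*$NF}
     END {for (k in n) {m=s[k]/n[k];
          printf "%s avg=%.1f sd=%.1f\n", k, m, sqrt(ss[k]/n[k]-m*m)}}' dnsresults
```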

Table 1. Office

Table 2. Home

Here are the results just running 7/8th as .com and 1/8th as .org at home.

Table 3. Home dotcom

Hmmm. Where does that leave us? My conclusion is that Google is performing pretty decently for me. Although it is not faster than my ISP’s rotten DNS service, it is delivered without tampering. Norton is removed from contention for failing that test. Despite Cloudflare’s claims and high rankings, I’m not finding them to be especially high performance. They do sound slightly more wholesome and I might go with them just for that reason. I’m also a bit surprised to find the standard deviation of the look up times to be huge. What’s going on there? Hard to say. I may do some more analysis on that but for now it looks like going with Google is not a serious performance hit over your awful ISP’s evil DNS server. And if you want to cast a vote for some privacy propaganda, Cloudflare is probably tolerable and certainly easy to remember at 1.1.1.1.

Since I’ve not discovered anything too important or conclusive, I’ll leave you with one of my favorite scenes from a 1970s movie which perfectly captures the essence of DNS.

Good luck!


2019-01-30 19:36

Over forty years ago, I lost my favorite sports equipment — snow!

Today, I have it once again. I was also able to replace the corresponding kit with these cross-country skis from The Store.

Here I am in my backyard wearing Nordic skis for the first time since before the movie Star Wars existed.


It took a couple of km for forty-year-old memories to revive but luckily I am still pretty good at cross-country skiing. It is definitely a serious high-wattage activity! Which of course I like.

Today everybody seems to be freaking out about some refreshing cool weather (1F/-17C at the moment) and a bit of snow.


The Buffalo area is shown in the green circle. I actually was kind of expecting more snow, (maybe something like this) but it will do.

Although most commerce seems to have come to a grinding halt today, I was happy to be able to do my commute by ski — that also for the first time in over forty years when I sometimes skied to school.

I not only survived but I really enjoyed it.


I actually didn’t fully notice how cold it was this morning. After my customary driveway shovel warm-up, once out on skis I was working pretty hard. It can be tough slogging through deepish snow on those (groomed trail) skis or trying my luck with the dangerous cars on icy roads. I had fun, but I will probably just stick with biking to work which is much easier and do my skiing on the gorgeous forest trails behind my house.

I <3 Snow

2019-01-25 19:38

I am loving the weather here in Buffalo!


A lot of people, I have noticed, take snow as a sign to hide indoors. Or maybe they’re hibernating. Well, not me! I am outside doing something fun every day! My wife is outside every day enjoying the magnificent scenery too. So I know it’s not just me.

However, being me may be an extreme case. For one thing, I love shoveling my driveway. I’ve told many people that I actually like shoveling snow, and indeed I do.


That’s a good workout! A bit dangerous though with the icy footing. But it’s good to shovel the driveway to see what I’m getting into for my bike ride to work. All of my normal trips from my house to work (i.e. with no trips to Canada) have been done by bike this year.

Disclaimer: I must point out that unless you’re already riding your bike in the snow, you probably shouldn’t start now. I am very good at this. And even still, it is very dangerous. I’m planning to write more about some of the details involved in surviving this but just keep in mind that it’s an extreme sport and very close to not even possible. What is entirely possible is that it will catch up to me in some gruesome way even though I am very experienced and very well-prepared. Do not imagine yourself doing this casually!

So far, it has been working out mostly well. It never really dumps enough snow to really bog me down too much and when it does, the roads are usually plowed somewhere. The problem is when you need to share those roads with idiot drivers. Then there is a heavy drop in the chances of returning alive. The university is very good about plowing its bike paths and that makes things easy once I get to the campus. But for a few hundred meters, I sometimes have to slog through some pretty serious stuff. Yesterday I rolled into this and realized that it was a huge puddle with a crust of ice on it.


I couldn’t move laterally because I had carved the ice and I just had to press on through it. Today, I hit the same spot but it was now deeper and I got stuck right at the middle in about 6 inches of water. I was barely able to hop out and drag my bike with me.

Here is a video I shot of this path just past the puddle (had my telephone camera out, why not?).

Note that this was not actually very challenging or else I never would have filmed it. Remember, I am holding the camera in my (bare) right hand and steering with only one hand. That’s my employer’s telephone camera actually and I am trusting myself on that icy bridge to not go down and drop it in the river. So despite appearances, this ride was quite safe. I wanted to record it because the snowscape was spectacularly beautiful — if you like that sort of thing, and of course I do!

A much bigger challenge was a ride I did on Thursday of last week. I rode from my house to Lockport. This will be a wonderful, mild, and easy ride in the summer. It runs along the Erie Canal and it has a gorgeous bike path the whole way. This time of year, however, the experience is quite a bit different. First, it was 12F/-11C when I set off. All of my cold weather management skill worked out fine. Here I am taking off a layer because I’m too hot.


The trickier problem is that the path is covered in random lumps of packed snow and shards of jagged ice. The experience was very similar to riding on wet cobblestones. But hey, some people enjoy that.

When I did an earlier reconnaissance of Lockport I was puzzled by the canal being mostly empty. I was happy to find out where the water went.


The Pendleton Guard Lock seems to hold it back. I think it’s drained while they’re doing work on Lockport’s eponymous locks.

Here I am near Lockport cooling off.


It’s very important to not ever become too hot because if you sweat more than you can release as vapor, that will come back in about 15 minutes to efficiently pull all the heat out of you. I had some moments where my footwear system was right on the edge of getting cold because of this but overall I planned it well.

Here’s another one-handed video showing the trail conditions in a part of the trail that was not too hard.

In all, that ride took me four hours. I was pretty tired but the day wasn’t over! In the afternoon we went down to Canalside. (Here is a summer photo I previously posted.)

For the first time in over 40 years, I ice skated outside.


Very nice. Here’s a very short video of me skating.

Yes, I love this weather! I am feeling the exact opposite of what those poor wretched polar bears in the San Diego Zoo (live webcam) are feeling. I love it here!



For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2019