Chris X Edwards

I (probably) love the philosophical hedging Unity uses to describe its physics engine (on their blog 2018/11/12) - "Enhanced determinism".
2019-02-19 08:24
Traditional education says online education is insufficient but the disembodied experience of textbooks (esp the prof's!) is very enriching.
2019-02-16 12:23
Want to search for "accidentally quit firefox can't restart because it says firefox running". But of course I can't.
2019-02-14 15:08
Emergency emphatic beeping text message from Big Brother telling me to drive carefully - presumably while reading the text. Brilliant.
2019-02-13 15:59
I wonder how many times simple probability calculus was silently reinvented by people who immediately changed jobs to professional gambler.
2019-02-13 09:11
Blah Blah
--------------------------

DNS Belabored - Plus Fast Unix Dot Plotting

2019-02-13 22:03

Last weekend I ventured into some light DNS research. I wrote some scripts that did some lookups, showed a bunch of confusing numbers, and really didn’t have anything super interesting to say about it. Don’t get your hopes up or anything because I don’t think the topic has any really interesting surprises, but I did want to finish learning what I could learn.

For the last few days I’ve been running my lookup script every once in a while, this time doing 1000 name lookup operations for each DNS server on my list (yes, there may be others, but I don’t know much about them). Now I have a collection of 5000 random lookups for each server, each trying to resolve a random three letter domain name, 7/8 of which are ".com" and 1/8 ".org". Surely that is enough data to say something useful.

In my last post I pretty much left it with this.

I’m also a bit surprised to find the standard deviation of the lookup times to be huge. What’s going on there? Hard to say. I may do some more analysis on that…

This is still true and I feel like this is the most interesting thing. My desire to understand this data led me to research how best to visualize it. Back in the surreal part of my life where I dealt with biotech people, I got to see a lot of plots of the average with an "error bar". Here’s a typical one from a paper which cites the lab I used to work for.

typical.png

Now I have to be honest, I never understood just how those error bars truly related to reality and I always secretly suspected that they did not! This particular example plainly states that it is simply a "standard deviation of the mean" but it’s not always so clear. Long-time readers with extraordinary memories may recall that I have taken a very keen interest in the philosophical foundations of modern statistical thinking. (Spoiler alert! — they range from circular to turtles to non-existent!) Here’s a typical review I wrote in 2012. And another and another, etc. So you can imagine how delighted I was to find that in today’s XKCD, the brilliant Randall Munroe illustrates my misgivings with crystal clarity. Check it out.

XKCD nails it again

That is exactly the problem with this kind of plot.

Ok, so how should one present such data? I came across this interesting perspective called Beware Of Dynamite Plots from a biotech guy, Koyama. He advocates just plotting all the data. I think his points are well made and biotech people would be well-advised to carefully think about his message.

However, for computer nerds things might be a bit different. Koyama is rightly concerned about a paucity of data which is typical with life science wet lab experiments. I, on the other hand, easily collected a data point for each of 75000 experiments before I simply got bored with the process. I do like the idea of showing that data though. I could pop that in Excel, but only new Excel because before 2007 it wouldn’t fit (so I’m told — I’m not exactly a user). You can imagine that experiments like I’m doing could have billions of data points. If they involve high speed network traffic or disk writes or something this could require some very high-performance methodology.

Wanting to show all the data I could, I decided to write a program that takes a Unix data stream (or data in a file or collection of files) and simply plots pairs of numbers to an image. That’s it. Wanting it to be very fast and efficient (the whole thing requires only 14kB of disk space), I wrote it in C.

What kind of interesting output can I get with such a sharp tool?

I first organized the data I collected to be in this form.

dnsdata
1 249
3 69
10 122
6 35
10 1512
13 40
9 626
6 132
12 262
4 252

The first field is the number of the DNS service I want to test. For example, 4 is Google’s 8.8.8.8 and 6 is Cloudflare’s 1.1.1.1. The second column is the time in milliseconds it took to look up a randomly synthesized domain.

The great thing about Unix is that you can pipe things to and fro, and this allowed me to use awk to get the data scaled properly for the plot. Other than that, I just piped it to my program like this.

awk '{if ($2<4000) print $1*31 " " $2/4}' dnsdata | ./xy2png

Here I’m throwing away all data greater than 4000ms (4 seconds!) because that’s into some weird stuff that probably should be ignored. I scale the first field up by 31x to distribute the servers across the plot’s X axis and scale the times down to a quarter (0.25x) so that 4000ms fits on the 1000 pixel plot.

And here’s what we get.

results-4s.png

I left that image raw so you can see the kinds of images my xy2png program produces. Zooming in to look at only results that came back within a half second (500ms), and then dressing it up in post production, I get something like this.

results-.5s.png

Note that the left-to-right order is the same as in the previous plot, but I inverted the Y axis to make it more customary — increasing values usually go up.

I feel like this plot is finally starting to give some sense of the subtle differences between these services. For example, where average and standard deviation would have been confusing, we can easily see that Quad9 has some serious problems. Google shows an odd split: either it returns something very quickly (cached?) or its next fastest lookup regime is relatively slow. OpenDNS has some snappy lookups but the bulk of their results are definitely slower than the others. If for some weird reason you like Norton, the plot makes it clear that going with their secondary name server is the better bet.

The most interesting and practical result of this detailed analysis, however, is how clearly Cloudflare’s excellent performance is revealed. 1.1.1.1 and 1.0.0.1 are the best name servers here, beating both omniscient Google and Verizon’s privileged network position (as my ISP).

Set your DNS servers confidently to 1.1.1.1 and 1.0.0.1, and if you have an analysis that needs a bajillion points plotted in a serious hurry, keep my Unix/C technique in mind.

xy2png.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "png.h"
#define MAXLINELEN 666

void process_line(char *line, png_image i, png_bytep b) {
    const char *okchars= "0123456789.-+|, "; /* Optional: "eE" for sci notation. */
    for (int c= 0;c < strlen(line);c++) { if ( !strchr(okchars, line[c]) ) line[c]= ' '; }
    const char *delim= " |,"; char *x,*y;
    x= strtok(line, delim);
    if (x == NULL) { printf("Error: Could not parse X value.\n"); exit(1); }
    y= strtok(NULL, delim); /* When NULL, uses position in last searched string. */
    if (y == NULL) { printf("Error: Could not parse Y value.\n"); exit(1); }
    int px= atoi(x), py= atoi(y);             /* Convert parsed fields to pixel coordinates. */
    if (px < 0 || px >= (int)i.width || py < 0 || py >= (int)i.height) return; /* Skip out-of-range points rather than corrupt memory. */
    b[py * i.width + px]= 255;                /* Compute and set 1d position in the buffer. */
    /* if (verbose) printf("%d,%d\n",px,py); */
}

void process_file(FILE *fp, png_image i, png_bytep b){
    char *str= malloc(MAXLINELEN), *rbuf= str;
    int len= 0, bl= 0;
    if (str == NULL) {perror("Out of Memory!\n");exit(1);}
    while (fgets(rbuf,MAXLINELEN,fp)) {
        bl= strlen(rbuf); // Read buffer length.
        if (rbuf[bl-1] == '\n') { // End of buffer really is the EOL.
            process_line(str,i,b);
            free(str); // Clear and...
            str= malloc(MAXLINELEN); // ...reset this buffer.
            rbuf= str; // Reset read buffer to beginning.
            len= 0;
        } // End if EOL found.
        // Read buffer filled before line was completely input.
        else { // Add more mem and read some more of this line.
            len+= bl;
            str= realloc(str, len+MAXLINELEN); // Tack on some more memory.
            if (str == NULL) {perror("Out of Memory!\n");exit(1);}
            rbuf= str+len; // Slide the read buffer down to append position.
        } // End else add mem to this line.
    } // End while still bytes to be read from the file.
    if (len) process_line(str,i,b); // Handle a final line that lacks a trailing newline.
    fclose(fp);
    free(str);
}

int main(const int argc, char *argv[]) {
    png_image im; memset(&im, 0, sizeof im); /* Set up image structure and zero it out. */
    im.version= PNG_IMAGE_VERSION;           /* Encode what version of libpng made this. */
    im.height= 1000; im.width= 1000;         /* Image pixel dimensions. */
    png_bytep ibuf= calloc(1,PNG_IMAGE_SIZE(im)); /* Reserve and zero the image's memory buffer. */
    FILE *fp;                                /* Input file handle. */
    int optind= 0;
    if (argc == 1) {                         /* Use standard input if no files are specified. */
        fp= stdin;
        process_file(fp,im,ibuf);
    }
    else {
        while (optind<argc-1) { // Go through each file specified as an argument.
            optind++;
            if (*argv[optind] == '-') fp= stdin; // Dash as filename means use stdin here.
            else fp= fopen(argv[optind],"r");
            if (fp) process_file(fp,im,ibuf); // File pointer, fp, now correctly ascertained.
            else fprintf(stderr,"Could not open file:%s\n",argv[optind]);
        }
    }
    png_image_write_to_file (&im, "output.png", 0, ibuf, 0, NULL); /* PNG output. */
    return(EXIT_SUCCESS);
}
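
In case you want to try this yourself, here is roughly how a build and run could look; compiler flags and the libpng package name vary by system, so treat this as a sketch.

# Build against libpng (png.h and the library must be installed).
cc -O2 -o xy2png xy2png.c -lpng

# Scale and plot as described above; the image lands in output.png.
awk '{if ($2<4000) print $1*31 " " $2/4}' dnsdata | ./xy2png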

DNS - The Mediocre, The Bad, And The Ugly

2019-02-10 23:02

Who knows what the 12th Chief Directorate is? I didn’t know. Yet it turns out that the work this organization does is critical in preventing the gruesome deaths of hundreds of millions of people. True! These guys, like their equally obscure but important counterpart at the US NNSA, are in charge of keeping Russian nuclear stockpiles from getting into the wrong hands and causing some very serious problems. The point here is that just because you may have been unaware of something doesn’t mean that it is not extremely important to you.

That brings us to today’s topic, DNS, the Domain Name System. Like nuclear weapon stockpile security, Border Gateway Protocol security, and the security of firmware in your ISP’s networking hardware, DNS is one of those things that few non-professionals think much about yet which is in fact extremely important to everybody. Yes you! Unlike those other serious issues I mentioned, it may actually be possible for the average computer user (i.e. the average human now) to do something positive with respect to DNS.

My website is served from this host on the internet: 01000010.00100111.01100001.11010101. I went ahead and wrote it the way computers like to think of things just so I can emphasize what DNS does. Computers really have no interest in or ability to deal with a name like "www.xed.ch" in much the same way that you have no interest or ability to memorize and use that binary number. DNS is the bridge. It is the world-wide distributed look up system that translates things we can remember to things computers can actually compute. Here is a popular internet nerd trying to explain this stuff in simple terms. DNS is also a battleground on which tech giants are prosecuting their surveillance capitalism war — you, my dear end users, are the bullets.
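
Incidentally, you can watch this name-to-number translation happen explicitly with an ordinary lookup tool like dig (part of the standard DNS utilities). Something like this should return the address encoded above, though of course such answers can change.

$ dig +short www.xed.ch
66.39.97.213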

There are a lot of serious problems with DNS. First off, it is absurdly complex. It is so complicated that professional administrators almost never use DNS in its intended hierarchical form. For example, compare the laudably correct click.e.economist.com with typical crap like awsstatic.com or ajax.googleapis.com. Or google-public-dns-b.google.com, Google’s DNS server! In case that is still somewhat opaque and confusing, what that example shows is the chaps at The Economist are using DNS properly, while tech giants Google and Amazon fail dismally. (To use the DNS system correctly as conceived, they should have used something like static.aws.amazon.com, ajax.apis.google.com, and b.public.dns.google.com — instead they condition us to expect gmtplus3malwaregoogle.com to be legitimate.) The fact that Google runs one of the most important DNS services in the world (8.8.8.8) and yet muddles their own DNS organization is probably nothing to be seriously alarmed about (though I am mildly concerned). It may be more disturbing that Google is rapaciously trying to sell your privacy for a small fraction of a "cost per click".

Amazingly, even though Google is a bit ugly, they are not so bad relatively speaking. They are providing a free-ish service and as far as I know they don’t maliciously tamper with the DNS data itself. Other providers do. That is what really bugs me. A lot. Here’s how that works. If I run the following lookup against Google’s 8.8.8.8 DNS service, I get this.

$ dig @8.8.8.8 xeddotch1xeddotch2xeddotch3xeddotch4.com | grep status
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 818

I used a long and rather impossible domain name that I believe should not exist. Indeed, it is not registered, as shown by the NXDOMAIN (non-existent domain) status. However, if I run that command against a variety of DNS providers, here’s what I get.

Server           Status
Cloudflare       NXDOMAIN
Cloudflare_2     NXDOMAIN
Comodo           NXDOMAIN
Comodo_2         NXDOMAIN
Google           NXDOMAIN
Google_2         NXDOMAIN
OpenDNS          NXDOMAIN
OpenDNS_2        NXDOMAIN
Quad9            NXDOMAIN
Quad9_2          NXDOMAIN
Norton           NOERROR
Norton_2         NOERROR
Verizon_home     NOERROR
Verizon_2_home   NOERROR
local            NOERROR

This is showing us that Norton and Verizon (my ISP!) are fielding a request for that nonsense domain and returning an answer! They are telling my client software (e.g. a browser), "Sure, that name totally exists and here is the IP address where you can find it!"

What will you find when your browser trustingly goes to that IP address? It won’t be good, whatever it is. Here is a report on some DNS malware that I was directed to under such circumstances by my former perfidious internet "service" provider, Time Warner (slinking away from its reputation now renamed as Spectrum).

So the first rule of DNS for me is: if the domain does not exist, return a "NXDOMAIN" response. To actually fulfill the request with a bogus "NOERROR" is super uncool.
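
If you want to check your own candidate resolvers for this kind of tampering, here is a minimal sketch in the spirit of the check above; the server list and the nonsense domain are just examples.

#!/bin/bash
# An honest server answers NXDOMAIN for a domain that does not exist;
# NOERROR here means the server is fabricating an answer.
for S in 1.1.1.1 8.8.8.8 199.85.126.10 ; do
    printf '%-18s' ${S}
    dig @${S} xeddotch1xeddotch2xeddotch3xeddotch4.com | grep status
done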

This entire DNS rant was kicked off by my discovery today at work that our office’s primary DNS server (specified by Verizon over DHCP) is as dead as a Norwegian Blue parrot. Rule two — the DNS servers must actually work. Note that functioning is less important than not tampering with the data!

The next thing to look for is a provider which at least acts concerned about your browsing privacy. Whatever the reality, Google has failed to direct such propaganda effectively at me. OpenDNS, by being utterly not "open" in any way, is also failing. But I have to say that Cloudflare’s newish 1.1.1.1 is whispering the right sweet nothings, that it cares about me and my sensitive feelings, etc. Great.

Finally, let’s make sure that the DNS provider we select is not a total performance catastrophe. We could go the easy way and just look at a site like dnsperf.com. But a stronger approach is to check for yourself from your own location on the internet. Not only that, but ideally you would check the kinds of lookups you are likely to do.

What kind of lookups am I likely to do? Here is a nice Unix command that reaches into the browser history’s gruesome SQL database and extracts a list of the most frequently visited top level domain names. (Normal people should feel free to ignore all the code in this post.)

echo "SELECT datetime(moz_historyvisits.visit_date/1000000,'unixepoch') dd, moz_places.url \
FROM moz_places, moz_historyvisits \
WHERE moz_places.id = moz_historyvisits.place_id \
AND dd > datetime('2005-01-01 12:00:00') \
AND dd < datetime('2019-03-01 12:00:00') \
ORDER BY dd;" \
| sqlite3 ~/.mozilla/firefox/*.default/places.sqlite \
| sed 's:\(^.*//[^/]*\.\([a-z][a-z]*\)/.*$\):\1 \2:' \
| cut -d' ' -f3 | sort | uniq -c | sort -n

Running that I get the following results with 10 or more hits.

     10 jp
     10 pro
     11 zone
     14 si
     15 fi
     15 se
     17 gg
     19 in
     19 nl
     19 tech
     20 fm
     20 fr
     20 name
     20 to
     21 cc
     21 it
     29 es
     29 ly
     32 nz
     34 eu
     38 mil
     41 biz
     45 mx
     46 sm
     59 int
     62 ai
     67 gl
     86 tv
     88 au
     89 me
     95 info
    112 us
    115 de
    115 ee
    132 co
    144 be
    289 ca
    700 io
    951 uk
   1786 net
   1830 gov
   3257 edu
   9728 ch
  13374 org
  26519
  81643 com

There are some interesting things to note here. First is that the third most common top level domain (TLD) that I look up is the ".ch" of Switzerland. That is because my domain is registered in Switzerland. Your mileage may obviously vary, which is why you might want such a custom analysis.

The other thing to note is the blank in the second most popular position. An error? No. What this means is that only ".com" has more web visits than going straight to an IP number directly. The reason for this is that I have my page of links, which I use dozens of times a day, bookmarked as my "homepage" with http://66.39.97.213/thepage/ — this avoids a lookup completely. I strongly advise you to take your homepage or most used web page and bookmark it with the IP number already looked up. As you can see, a pretty big percent of what would be my normal DNS requirement simply does not exist.
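
A related trick for Unix users who would rather keep using the name: a static entry in /etc/hosts also skips the network lookup entirely, since the resolver normally consults that file before asking any DNS server (the usual "files dns" order in /etc/nsswitch.conf). A hypothetical entry for my page would look like this.

# /etc/hosts entry pinning the name to the known address.
66.39.97.213   www.xed.ch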

Clearly my traffic is dominated by .com lookups with some .org thrown in. The following program goes through the DNS providers I want to test and checks the lookup time with the dig command. The domains are synthesized as random 3 character names (like bla.com, ibm.com, etc.). We can be pretty confident that all 26³ (17,576) combinations of three letters are taken. I’m showing it here as 7 parts .com to one part .org but you can set your own mix to match your own habits and what is important to you.

#!/bin/bash
declare -A NS
NS[local]=192.168.1.1
NS[Verizon_home]=68.237.161.12
NS[Verizon_2_home]=71.250.0.12
NS[Google]=8.8.8.8
NS[Google_2]=8.8.4.4
NS[Cloudflare]=1.1.1.1
NS[Cloudflare_2]=1.0.0.1
NS[Norton]=199.85.126.10
NS[Norton_2]=199.85.127.10
NS[Comodo]=8.26.56.26
NS[Comodo_2]=8.20.247.20
NS[Quad9]=9.9.9.9
NS[Quad9_2]=149.112.112.112
NS[OpenDNS]=208.67.222.222
NS[OpenDNS_2]=208.67.220.220

function randomain {
    # Pick three random lowercase letters and attach a TLD drawn
    # from a list that is 7 parts com to 1 part org.
    TLD=( com com com com com com com org )
    NofD=${#TLD[@]}
    echo "$(tr -dc 'abcdefghijklmnopqrstuvwxyz' </dev/urandom | head -c3).${TLD[$(($RANDOM%${NofD}))]}"
}

for N in "${!NS[@]}"; do    # For each named server...
    for C in {1..100}; do   # ...do 100 random lookups.
        D=$(randomain); sleep ".$(($RANDOM%10))"  # Brief random pause between queries.
        dig @${NS[${N}]} ${D} \
        | grep "time:" | cut -d' ' -f4 | sed "s/^/${N} ${D} /"
    done
done

This program produces output something like this. I check each DNS server 100 times. The first field is the name service, the second is the random domain tested, and the final field is the "Query time" in msec. This is the time it takes to provide the necessary service, so lower is better.

Google xbg.com 59
Google ihe.com 47
Google roo.com 340
Google wuh.com 245
Google dbm.org 44
Google gvb.com 244
Verizon_2_home kpb.com 204
Verizon_2_home rmc.com 28
Verizon_2_home mag.com 92
Verizon_2_home pqs.com 56
Verizon_2_home fth.com 354

Note that I throw in a random sleep command to look a little less like a bot harassing the servers with a flood of weird lookups. This process took about 17 minutes to run 100 lookups on all the servers. At first I ran it on a more diverse mix of TLDs and I got the following results at my office and home respectively. They are summarized with the average and the population standard deviation of the query times in milliseconds.
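
Incidentally, those summaries are themselves a quick Unix pipeline job. Here is a sketch of the awk I would use, assuming the three-column output shown above has been saved in a (hypothetical) file called dnsresults.

# Mean and population standard deviation of field 3, grouped by field 1.
awk '{n[$1]++; s[$1]+=$3; q[$1]+=$3*$3}
     END {for (k in n) printf "%s %.2f %.3f\n", k, s[k]/n[k],
          sqrt(q[k]/n[k] - (s[k]/n[k])^2)}' dnsresults | sort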

Table 1. Office

Server           Mean (ms)   Std Dev (ms)
Google             118.59      130.235
Google_2           123.79      131.216
Cloudflare         232.77      779.159
Cloudflare_2       145.19      459.491
OpenDNS_2          236.83      343.913
OpenDNS            201.98      291.889
Norton             214.485     481.035
Norton_2           174.051     266.475
Comodo_2           277.18      396.707
Comodo             205.16      320.279
Quad9              167.24      209.143
Quad9_2            212.19      281.872
Verizon_2          108.71      125.518
local              164.61      416.003

Table 2. Home

Server           Mean (ms)   Std Dev (ms)
Google             146.09      314.483
Google_2           127         174.1
Cloudflare         173.87      546.234
Cloudflare_2       119.37      418.438
OpenDNS            138.55      204.299
OpenDNS_2          212.66      469.765
Norton             175.878     212.057
Norton_2           116.869     154.985
Comodo             187.62      322.842
Comodo_2           205.88      293.423
Quad9              170.16      125.299
Quad9_2            242.374     416.948
Verizon_home       146.56      277.313
Verizon_2_home     164.88      430.531
local              118.09      148.895

Here are the results running just the 7/8 ".com" and 1/8 ".org" mix at home.

Table 3. Home dotcom

Server           Mean (ms)   Std Dev (ms)
Google             123.03      159.778
Google_2           104.3       106.98
Cloudflare         199.59      557.837
Cloudflare_2       178.21      498.721
OpenDNS            155.41      192.484
OpenDNS_2          135.43      172.415
Norton_2           118.525     311.591
Norton             166.786     270.136
Comodo             284.61      452.571
Comodo_2           162.96      278.82
Quad9              193.152     445.668
Quad9_2            136.84      189.899
Verizon_home       121.88      213.81
Verizon_2_home     118.99      401.479

Hmmm. Where does that leave us? My conclusion is that Google is performing pretty decently for me. Although it is not faster than my ISP’s rotten DNS service, it is delivered without tampering. Norton is removed from contention for failing that test. Despite Cloudflare’s claims and high rankings, I’m not finding them to be especially high performance. They do sound slightly more wholesome and I might go with them just for that reason. I’m also a bit surprised to find the standard deviation of the lookup times to be huge. What’s going on there? Hard to say. I may do some more analysis on that but for now it looks like going with Google is not a serious performance hit over your awful ISP’s evil DNS server. And if you want to cast a vote for some privacy propaganda, Cloudflare is probably tolerable and certainly easy to remember at 1.1.1.1.

Since I’ve not discovered anything too important or conclusive, I’ll leave you with one of my favorite scenes from a 1970s movie which perfectly captures the essence of DNS.

Good luck!

XC

2019-01-30 19:36

Over forty years ago, I lost my favorite sports equipment — snow!

Today, I have it once again. I was also able to replace the corresponding kit with these cross-country skis from The Store.

Here I am in my backyard wearing Nordic skis for the first time since before the movie Star Wars existed.

xc.jpg

It took a couple of km for forty-year-old memories to revive but luckily I am still pretty good at cross-country skiing. It is definitely a serious high wattage activity! Which of course I like.

Today everybody seems to be freaking out about some refreshing cool weather (1F/-17C at the moment) and a bit of snow.

snow.png

The Buffalo area is shown in the green circle. I actually was kind of expecting more snow (maybe something like this) but it will do.

Although most commerce seems to have come to a grinding halt today, I was happy to be able to do my commute by ski — also a first in over forty years, since the days when I sometimes skied to school.

I not only survived but I really enjoyed it.

xccommute.jpg

I actually didn’t fully notice how cold it was this morning. After my customary driveway shovel warm-up, once out on skis I was working pretty hard. It can be tough slogging through deepish snow on those (groomed trail) skis or trying my luck with the dangerous cars on icy roads. I had fun, but I will probably just stick with biking to work which is much easier and do my skiing on the gorgeous forest trails behind my house.

I <3 Snow

2019-01-25 19:38

I am loving the weather here in Buffalo!

heartsnow.jpg

A lot of people, I have noticed, take snow as a sign to hide indoors. Or maybe they’re hibernating. Well, not me! I am outside doing something fun every day! My wife is outside every day enjoying the magnificent scenery too. So I know it’s not just me.

However, I may be an extreme case. For one thing, I love shoveling my driveway. I’ve told many people that I actually like shoveling snow, and indeed I do.

shoveled.jpg

That’s a good workout! A bit dangerous though with the icy footing. But it’s good to shovel the driveway to see what I’m getting into for my bike ride to work. All of my normal trips from my house to work (i.e. with no trips to Canada) have been done by bike this year.

Disclaimer: I must point out that unless you’re already riding your bike in the snow, you probably shouldn’t start now. I am very good at this. And even still, it is very dangerous. I’m planning to write more about some of the details involved in surviving this but just keep in mind that it’s an extreme sport and very close to not even possible. What is entirely possible is that it will catch up to me in some gruesome way even though I am very experienced and very well-prepared. Do not imagine yourself doing this casually!

So far, it has been working out mostly well. It rarely dumps enough snow to really bog me down and when it does, the roads are usually plowed somewhere. The problem is when you need to share those roads with idiot drivers. Then there is a heavy drop in the chances of returning alive. The university is very good about plowing its bike paths and that makes things easy once I get to the campus. But for a few hundred meters, I sometimes have to slog through some pretty serious stuff. Yesterday I rolled into this and realized that it was a huge puddle with a crust of ice on it.

slush.jpg

I couldn’t move laterally because I had carved the ice and I just had to press on through it. Today, I hit the same spot but it was now deeper and I got stuck right at the middle in about 6 inches of water. I was barely able to hop out and drag my bike with me.

Here is a video I shot of this path just past the puddle (had my telephone camera out, why not?).

Note that this was not actually very challenging or else I never would have filmed it. Remember, I am holding the camera in my (bare) right hand and steering with only one hand. That’s my employer’s telephone camera actually and I am trusting myself on that icy bridge to not go down and drop it in the river. So despite appearances, this ride was quite safe. I wanted to record it because the snowscape was spectacularly beautiful — if you like that sort of thing, and of course I do!

A much bigger challenge was a ride I did on Thursday of last week. I rode from my house to Lockport. This will be a wonderful, mild, and easy ride in the summer. It runs along the Erie Canal and it has a gorgeous bike path the whole way. This time of year, however, the experience is quite a bit different. First, it was 12F/-11C when I set off. All of my cold weather management skill worked out fine. Here I am taking off a layer because I’m too hot.

lockportride1.jpg

The trickier problem is that the path is covered in random lumps of packed snow and shards of jagged ice. The experience was very similar to riding on wet cobblestones. But hey, some people enjoy that.

When I did an earlier reconnaissance of Lockport I was puzzled by the canal being mostly empty. I was happy to find out where the water went.

lockportride2.jpg

The Pendleton Guard Lock seems to hold it back. I think it’s drained while they’re doing work on Lockport’s eponymous locks.

Here I am near Lockport cooling off.

lockportride3.jpg

It’s very important to not ever become too hot because if you sweat more than you can release as vapor, that will come back in about 15 minutes to efficiently pull all the heat out of you. I had some moments where my footwear system was right on the edge of getting cold because of this but overall I planned it well.

Here’s another one-handed video showing the trail conditions in a part of the trail that was not too hard.

In all, that ride took me four hours. I was pretty tired but the day wasn’t over! In the afternoon we went down to Canalside. (Here is a summer photo I previously posted.)

For the first time in over 40 years, I ice skated outside.

skating_.jpg

Very nice. Here’s a very short video of me skating.

Yes, I love this weather! I am feeling the exact opposite of what those poor wretched polar bears in the San Diego Zoo (live webcam) are feeling. I love it here!

snowman.jpg

Poor Weather For Normal Cyclists

2019-01-21 08:31

Shoveling the driveway this morning, I had to go back in and put on level 2 gloves. That was a clue. My driveway seemed dry but was slick like a skating rink. Another clue. However in retrospect, I’m glad I didn’t see this before the attempt.

pw4b.png

In retrospect, I’m probably lucky to have made it alive.

snow1.jpg

snow2.jpg

But no real problems on this ride. Feeling pretty smug.

snow3.jpg

It’s much safer to fight the hardest weather than the easiest idiot drivers.

--------------------------

For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2018