Chris X Edwards

Finally unpacked all the snow bicycling gear I designed and made--still works great! My talents were wasted in San Diego. I love the snow!
2018-11-14 09:15
I'm pretty sure that California's Prop65 warnings are the boy crying wolf that will make real dangers quite a nasty surprise.
2018-11-13 20:55
I wonder how many people die from the stress of dealing with "health" "insurance" "coverage"? Struggling to avoid being one.
2018-11-13 12:41
I thought "Tits group" was the most misleading Wikipedia article name but it has some stiff competition with the "Ballcock" page.
2018-11-11 11:21
On Splenda box: "Sugars Less than 1g" but the top ingredient is glucose sugar (dextrose)! N.b. "Serving size: 1g". An utterly evil product.
2018-11-09 10:29
Blah Blah

Steam Hardware Survey

2018-07-07 09:16

One of my eccentric beliefs is that the state of the art of computing is the state of the art of gaming and not the other way around. My Linux loyalty has greatly limited my gaming agenda but it has given me an interesting perspective. When it comes to serious computing, I can’t help but notice that gamers lead the way. Consequently I like to keep informed about what the major trends are there.

Glance over this search I did for "best cpu motherboard". The word "gaming" in some form appears in 8 of the 10 first hits (it features prominently on the links of the other 2).


Are non-gaming people not interested in the "best cpu motherboard"? I’m going to say that as far as market forces go, no. Your CAD draftsman or molecular modeler or video editor or spreadsheet farmer doesn’t need the "best" today. Anything will do for them. But the gamers are always looking to push the frontiers.

A source of some interesting data is the Steam Hardware Survey. This is where Valve’s venerable gaming platform harvests a lot of data about exactly who is doing what with what.

Of course the most interesting thing about the Steam Hardware Survey to me is not necessarily the hardware. Before looking at that, let’s first look at what platforms the games are supporting these days.

Games appearing in search results constrained by OS.

Obviously these aren’t mutually exclusive. Let’s just say that everything runs on Windows. To be precise, only 23 of 48056 go missing when you limit the search to Windows only. Still, 24% of games available on Linux is pretty extraordinary given the situation as little as five years ago.

With the knowledge that about a quarter of the games are available on alternative platforms, what does the actual usage look like? We can check the hardware survey and see who’s running what. Here is a very depressing breakdown of the OS ecosystem in gaming.

  55.87%  Windows 10
  35.57%  Windows 7
   4.53%  Windows 8
   3.07%  OSX
   0.57%  Linux
   0.22%  Windows XP 32 bit

Talk about hanging on by a thread! At least Linux grew 0.02% since the start of 2017.

This is a bit surprising to me since Linux has enjoyed the aforementioned huge surge of compatible games in recent years. With impressive Wine support, a lot of games now run better on Linux than on their original target platform. But still, if you’re simply a path-of-least-resistance player, giving up a non-negligible number of games you’d like to play may not be realistic. I’m going to take these Linux numbers to represent the vive-la-resistance players!

What’s also strange is that with Android, tablets, and cloud office suites, it seems like Windows is teetering on the edge of insignificance. Here’s a good article about how Windows is being internally dismantled at Microsoft. But apparently gamers didn’t get the message. I wouldn’t be shocked to see Office move to an online platform and Windows get branded as a game platform like Xbox. Clearly Valve has their work cut out to fight Microsoft’s monopoly and Linux is a key weapon in that fight. Here’s a recent article on the whole showdown.

I was kind of surprised to learn that 0.22% of Steam users were using 32 bit Windows XP. Ouch. Reminds me of some ancient computers which are forever stuck to some equipment in science labs I know.

Among the 3% of people using Macs for games, it seems like they mostly keep them up to date.

  1.47%  10.13.4
  0.28%  10.13.3
  0.09%  10.13.2
  0.07%  10.13.1
  0.51%  10.12.6
  0.28%  10.11.6
  0.15%  10.10.5

I think I found the language preferences most interesting. English dominates Linux users a bit more than it does in general. Here’s an interesting look at the language settings. When all OSes are considered, Chinese is the dominant non-English language. But when you focus only on Linux users, the trend shifts quite a bit toward Russian and European languages.

All Steam:

  27.54%  Chinese
  10.67%  Russian
   4.59%  Spanish
   3.69%  Portuguese
   3.54%  German
   2.70%  French
   1.78%  Korean
   1.67%  Polish
   1.66%  Turkish
   1.08%  Japanese
   0.87%  Thai
   0.71%  Italian
   0.49%  Czech
   0.37%  Swedish
   0.35%  Hungarian
   0.33%  Dutch
   1.00%  Seven Others

Linux Only:

   4.05%  Russian
   2.72%  German
   1.98%  French
   1.54%  Spanish
   1.33%  Portuguese
   0.81%  Chinese
   0.57%  Polish
   0.45%  Italian
   0.25%  Japanese
   0.24%  Czech
   0.19%  Ukrainian
   0.13%  Hungarian
   0.10%  Dutch
   0.35%  Eight Others

Note that these don’t add up to 100% because they omit the majority, who use English language settings. Note also that many Russians (I can personally assure you), Chinese, and everyone else often use English settings despite English being a foreign language for them.
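Out of curiosity, here’s a quick Python sanity check on those numbers. Summing the listed non-English shares shows how big the remainder is in each case, assuming the remainder is essentially English plus anything unlisted.

```python
# Shares transcribed from the two lists above (percent).
all_steam = [27.54, 10.67, 4.59, 3.69, 3.54, 2.70, 1.78, 1.67,
             1.66, 1.08, 0.87, 0.71, 0.49, 0.37, 0.35, 0.33, 1.00]
linux_only = [4.05, 2.72, 1.98, 1.54, 1.33, 0.81, 0.57, 0.45,
              0.25, 0.24, 0.19, 0.13, 0.10, 0.35]

for name, shares in [("All Steam", all_steam), ("Linux only", linux_only)]:
    covered = sum(shares)
    print(f"{name}: non-English {covered:.2f}%, "
          f"implied English-ish remainder {100 - covered:.2f}%")
```

The implied remainder is about 37% for all of Steam versus about 85% for Linux only, which is consistent with the claim that English dominates Linux users more than it does in general.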

That was interesting to me with respect to Linux. But there’s much more of interest to the whole computer using world. Let’s look at the hardware.

Monitors are pretty standardized now. 60.49% have a 1920x1080 monitor. And 34.87% have two such regular monitors for a grand total of 3840x1080. I have two regular monitors rotated to portrait for serious work, for a grand total of 2160x1920, which really bakes Steam’s noodle.

I’m a little shocked that 60.51% have 4 CPUs, but in the many years this has been standard, only 4.15% have moved to 6 CPUs and only 0.99% to 8. That leaves 31.40% of people using 2 CPUs like my laptop had in 2006. The conclusion would seem to be that a lack of more processing cores is not generally a serious performance bottleneck in gaming.

RAM seems to not have become much more plentiful either. There are roughly three equal groups: less than 8GB, exactly 8GB, and more than that. At 38.97% the setups with 8GB are most prevalent. But 36.67% have more than 12GB. The survey format doesn’t even explore silly levels of RAM that people like to have for VMs and video editing. But again, I think we must conclude that after 8GB, going crazy on the RAM has diminishing returns on gaming enjoyment.

Graphics cards are a rout — 15 of 16 most popular graphics cards are Nvidia GeForce. Here’s a rough breakdown of popular video cards. Don’t get too upset if I’m a couple percent off — I think the survey was too. The interesting thing to note here is that half of gamers have pretty decent cards. My guess is that the other half are making do with laptops and tablets and such.

  32.46%  GeForce 10xx
  17.93%  GeForce 9xx
  10.38%  GeForce 7xx
   8.76%  GeForce 6xx
  10.20%  Radeon
  10.53%  Intel
  11.96%  Other

Last year I wrote about how prices are stabilizing in the computer world. Usually that’s a good thing, but since computers have historically been getting delightfully better and cheaper, this feels like the end of the party.

I’m really curious how this will all turn out. Clearly Intel has some serious challenges. Here’s a nice article discussing Intel’s many strategic problems. Clearly they’ve lost to Nvidia on what matters to the serious people, the gamers. And from that gaming enthusiasm came GPUs that are now the default way technical people do advanced technical things. Intel may want to catch up but I’m not sure that will be easy.

The only prediction I will make is that gamers will continue to greatly influence what technology gets seriously developed — more than the other way around.

Review: Homo Deus

2018-07-04 16:12

Homo Deus: A Brief History Of Tomorrow is the follow up book to Yuval Noah Harari’s Sapiens (which I reviewed here). It is hard to know what to say about this book. The first blurb on the back is from the freakishly insightful Daniel Kahneman and it immediately singles out this book’s core value.

It will make you think in ways you had not thought before.

Just so you don’t think I’m being lazy here, I had a look at today’s NYT hardcover nonfiction best sellers list and for each of the top 15 books, I calculated the percentage of Amazon reviews that contained the phrase "thought-provoking". Have a look.


Yes We (Still) Can - Dan Pfeiffer

0/51 = 0.0%


Calypso - David Sedaris

4/190 = 2.1%


The Soul of America - Jon Meacham

2/231 = 0.8%


How To Change Your Mind - Michael Pollan

3/128 = 2.3%


Trump’s America - Newt Gingrich

0/65 = 0.0%


Educated - Tara Westover

27/1389 = 1.9%


Bad Blood - John Carreyrou

1/402 = 0.3%


Lincoln’s Last Trial - Fischer & Abrams

0/36 = 0.0%


The Sun Does Shine - Anthony Ray Hinton

3/211 = 1.4%


Astrophysics For People In A Hurry - Neil dG Tyson

34/2851 = 1.2%


Born Trump - Emily Fox

0/64 = 0.0%


Barracoon - Zora Neale Hurston

3/177 = 1.7%


The World As It Is - Ben Rhodes

0/73 = 0.0%


Room To Dream - Lynch & McKenna

0/9 = 0.0%


Factfulness - Hans Rosling

6/258 = 2.3%

Total for NYT NF Top15: 83/6135 = 1.4%

That exercise itself was a bit thought-provoking. Check out how Harari’s book crushes this silly metric.

Homo Deus - Yuval Noah Harari - 128/1146 = 11.2%

Let’s look at a very typical example but one that I took a special interest in. Here he’s vaguely pondering the nature of consciousness (a topic I am especially interested in) without getting too precise about what he means by that word.

Maybe we need subjective experiences in order to think about ourselves? An animal wandering the savannah and calculating its chances of survival and reproduction must represent its own actions and decisions to itself, and sometimes communicate them to other animals as well. As the brain tries to create a model of its own decisions, it gets trapped in an infinite digression, and abracadabra! Out of this loop, consciousness pops out.

Fifty years ago this might have sounded plausible, but not in 2016. Several corporations, such as Google and Tesla, are engineering autonomous cars that already cruise our roads. The algorithms controlling the autonomous car make millions of calculations each second concerning other cars, pedestrians, traffic lights and potholes. The autonomous car successfully stops at red lights, bypasses obstacles and keeps a safe distance from other vehicles — without feeling any fear. The car also needs to take itself into account and to communicate its plans and desires to the surrounding vehicles, because if it decides to swerve right, doing so will impact on their behaviour. The car does all that without any problem — but without any consciousness either. The autonomous car isn’t special. Many other computer programs make allowances for their own actions, yet none of them has developed consciousness, and none feels or desires anything.

The photo on this page (p.115) is of Waymo’s Firefly/Koala (did it even have a proper name?). I’m pretty sure this particular specimen had absolutely no ambitions to talk to other cars. Brad Templeton, who advised Waymo on this project, has this to say about that issue.

[V2V is] definitely not necessary for the success of the cars, and the major teams have no plans to depend on them. Since there will always be lots of vehicles (and pedestrians and deer) with no transponders, it is necessary to get to "safe enough" with just your sensors. Extra information can at best be a minor supplement. Because it will take more than a decade to get serious deployment of V2V, other plans (such as use of the 4G and 5G mobile data networks) make much more sense for such information.

In addition, it is a serious security risk, as you say, to have the driving system of the car be communicating complex messages with random cars and equipment it encounters. Since the benefits are minor and the risk is high, this is not the right approach.

I point that out because this is one of the areas I know pretty well, and it could be that Harari is doing quite a bit of such hand waving.

The first part of the book makes a surprisingly animated attack on the idea of eating meat. I eat very little meat but I also have other topics higher on the list of philosophical issues to worry about. Still, if you’re a vegetarian, you’ll like the first part of the book.

Harari spends a decent amount of time letting you know that your mind is composed of different cognitive actors. Most people who know me have been exposed to that idea before. I do like his clever term "dividual" to describe our collection of cognitive contributors.

He talks here and there about science fiction topics. The title refers to what humans may "evolve" into — what will be beyond us (Homo sapiens) on the evolutionary tree.

Hence a bolder techno-religion seeks to sever the humanist umbilical cord altogether. It foresees a world that does not revolve around the desires and experiences of any humanlike beings.

But when I read that, I wondered, why so complicated? Some people can already "upgrade" themselves with an ancient medical procedure that will almost always strongly reorient a person’s priorities — castration. But Harari doesn’t talk much about why men aren’t improving their lives with that technology upgrade, so I’m not quite sold on the inevitability of fancier computerized versions.

Don’t get me wrong. I would recommend the book. It is interesting even if slightly questionable here and there. It’s decently well-written and engaging. Whatever flaws this book had, it was definitely a rare champion of "thought-provoking".

The Toaster Problem

2018-06-23 18:55

A couple of years ago I was visiting my dad and I was introduced to this toaster.


This is a fancy and somewhat expensive toaster. Just look at that fancy digital display! I don’t know about you but for me and many others toast is a breakfast thing. Sometimes I get up and make toast while other people are still sleeping. Imagine my surprise when this toaster announced the readiness of my finished toast with a shriek like a chimpanzee being dissolved in acid. It scared the hell out of me! I wondered, what moron designed a breakfast food preparation appliance to literally sound like an imminent train wreck?

This would not do. Annoyingly the toaster was assembled with special security screws. That just made me even more committed. First I had to make a tool to disassemble the thing.


Having accomplished that I was able to open it up.


And here’s the obnoxious source of all the fuss.


This spec sheet shows a very similar 1205 buzzer producing 85 dBA, roughly as loud as a dump truck driving by. I was able to disable the buzzer, reassemble everything, and finally make toast in blessed silence.

That was a toaster problem.

It was not the toaster problem.

The toaster problem is a shorthand phrase I use when discussing autonomous vehicle technology, or any futuristic technology that depends heavily on artificial intelligence that hasn’t quite been invented or perfected. The toaster problem is that toasters cannot reliably toast a piece of bread. Any toaster I know of can be set up to make one piece of toast satisfactorily, but scale that up to 9 pieces, maybe on a cold morning or a hot day, and if you want your toast perfect, you, a human, will have to keep an eye on it. No big deal usually, but the important point is this: if we can’t have autonomous toast toasting machines, how the hell are we going to have automatic machines that can perform double lane highway merges?

I focus on toasters because the technology is so banal and stupid. Not only that, but I personally can envision a solution. I believe that I could set up a camera to watch the progress of my bread toasting and with enough iterations, I could train a machine learning algorithm to produce a "toastedness" score which would actually be related to the toast you were wanting regardless of how hot the coils were at the start of the process. I’m a wee bit surprised that KitchenAid doesn’t have such a thing for $1000.
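To show how simple I think the baseline could be, here is a hedged sketch of that hypothetical "toastedness" scorer. Everything here (the function names, the 0.45 target, the fake camera frames) is invented for illustration; a real version would be a trained model watching actual camera frames.

```python
import numpy as np

def toastedness(image: np.ndarray) -> float:
    """Hypothetical 'toastedness' score in [0, 1] from a grayscale
    crop of the bread: 0 = untoasted white bread, 1 = charcoal.
    This is just the obvious baseline: mean darkness of the surface."""
    gray = image.astype(float) / 255.0
    return float(1.0 - gray.mean())

def done(image: np.ndarray, target: float = 0.45) -> bool:
    # The control loop: pop the toast when the score hits the target,
    # regardless of how hot the coils were at the start.
    return toastedness(image) >= target

# Fake frames standing in for the camera: bread darkening over time.
frames = [np.full((64, 64), v, dtype=np.uint8) for v in (230, 180, 140, 110)]
scores = [round(toastedness(f), 2) for f in frames]
print(scores)  # darker frame -> higher score
```

The point of the score being relative to the image rather than to elapsed time is exactly the claim above: it tracks the toast you wanted, not the toaster's thermal history.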

But why is this a useful phrase? It’s important because there are tons of things that are unsolved toaster problems. For example, trains have drivers. I happen to know that some trains do not and this industry is working hard on their toaster problems, but still — if 2d cars are to drive themselves, shouldn’t we be seeing 1d trains do it first?

Another example being worked on is floor cleaning machines that can operate with or without a driver. This is actually the first (2d) autonomous vehicle I ever got to ride.


A couple weeks ago I spent twenty minutes being actively driven around O’Hare airport on the ground in a fancy fleet-managed airplane. I would guess that taxiing in such a controlled environment is a toaster problem even if full gate to gate autonomous flying turns out more complex. Certainly ground support vehicles in tightly controlled airports could be easier than general passenger car transportation. Oxbotica is working on that toaster problem now.

Sometimes it can be tricky to place a technology concept on the toaster spectrum. Is an automated battleship easier than an unoccupied robocar that safely doesn’t kill cyclists? That probably depends on many complicated factors, but the idea is currently more than science fiction. How about a car that can race off-road through the Mojave desert? It’s not obvious.

Certainly partial technological progress on the way to fully autonomous passenger cars implies many useful intermediate technologies. Is a car that can park itself essential before a car that can do everything that I can do as a driver? I’d say so. All the modern ADAS features are toaster problems solved.

I think one of the biggest mistakes made by a lot of autonomous car teams is to neglect the toaster problems. For example, Kitty Hawk is aiming for flying autonomous cars. I suppose if you’re ok eating burned toast, you might as well dream big.

UPDATE 2018-07-04

Car alarms. That’s way way lower than toast. And, a very similar stupid problem, the obnoxious klaxons of garbage trucks and backhoes in reverse. There will be no meaningful autonomous cars while these things exist.

UPDATE 2018-07-12

The Economist covers some projects that may literally be working on the toaster problem as part of a grander robotic chef.

There can be toaster problems in other areas too. For example, the fact that I can’t live under the surface of the ocean or in central Greenland is a toaster problem for people wanting to colonize Mars.

Review: Probabilistic Robotics

2018-06-07 13:09

I have been interested in Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard, and Dieter Fox since I first heard about it while learning about the rocket science of Kalman filters from Professor Thrun himself in last year’s grand educational experience with Udacity (a company started by, yes, Sebastian Thrun). I was finally able to put my employer’s library to use and borrow this massive and expensive book. I found the topic to be interesting and important enough that I wanted the hardcore experience and this is definitely it!


A good summary of the book’s mission is on page 488:

Classical robotics often assumes that sensors can measure the full state of the environment. If this was always the case, then we would not have written this book! In fact, the contrary appears to be the case. In nearly all interesting real-world robotics problems, sensor limitations are a key factor.

And we learn that it is not only sensors that fail to tell the truth—it turns out that actuators don’t actually do exactly what you tell them either. Oh, and the maps you have or make are never quite right. These are the problems that this book tries to come to grips with.

Another way to think of it is that the existence of this book explains why a Roomba navigates the way it does (randomly). Or put another way, "stupid" easy navigation may be just as smart as fiendishly difficult navigation if you can get away with it. This book is not looking for the easy solution!

A big topic was SLAM, which stands for Simultaneous Localization And Mapping (note Professor Thrun’s DARPA Challenge car, Stanley, in the SLAM Wikipedia page). This is where the robot is dropped into a place and has to figure out what’s there and how to reliably not hit it, even when all sensors are a bit wonky. This is fine, but I think there is more to this topic than the book even considered (despite covering EKF SLAM, GraphSLAM, SEIF SLAM, multi-agent SLAM, etc). SLAM in rooms or controlled indoor environments, which the book spent a lot of time on, may be necessary for SWAT teams or shutting down a seriously malfunctioning nuclear reactor, but for everybody else (and the nuclear plant, actually), just mount cameras on the walls! This may not be a terribly hard problem unless you really want it to be! But hey, what do I know?

If I had to provide a one word answer to all of the problems this book worries about, I would say: Bayes. Apparently using Bayes Theorem early and often can really provide a lot of help with these tricky problems. How exactly that is done can be tricky.
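To make "Bayes early and often" concrete, here is a toy discrete Bayes filter: the classic 1-D corridor localization example, not anything transcribed from the book, with made-up sensor and motion probabilities.

```python
# A discrete Bayes filter localizing a robot in a 1-D corridor of
# 5 cells, two of which contain a door. All numbers are invented.

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def sense(belief, world, measurement, p_hit=0.8, p_miss=0.2):
    # Bayes update: multiply the prior by the measurement likelihood,
    # then renormalize.
    return normalize([b * (p_hit if cell == measurement else p_miss)
                      for b, cell in zip(belief, world)])

def move(belief, p_exact=0.9, p_stay=0.1):
    # Convolve the belief with a sloppy "move one cell right" motion
    # model (wrapping around at the corridor ends for simplicity).
    n = len(belief)
    return [p_exact * belief[(i - 1) % n] + p_stay * belief[i]
            for i in range(n)]

world = ['door', 'wall', 'door', 'wall', 'wall']
belief = [0.2] * 5                     # no idea where we are
belief = sense(belief, world, 'door')  # saw a door
belief = move(belief)                  # drove one cell right
belief = sense(belief, world, 'wall')  # now we see a wall
print([round(b, 3) for b in belief])   # -> [0.036, 0.409, 0.036, 0.409, 0.11]
```

Even with lying sensors and a sloppy motion model, two measurements and one move concentrate the belief onto the two cells consistent with "door, then wall". That repeated multiply-and-normalize is the whole trick, applied over and over.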


Page 233 quotes (Cox 1991) by saying, "Localization has been dubbed the most fundamental problem to providing a mobile robot with autonomous capabilities." It is definitely hard, but if they still believe that after doing some work on the autonomous car problem among idiot human drivers, then I think they need to take another look at things.

Every chapter concluded with (some freakishly microscopic print in) a section called "Bibliographical Remarks". I found this interesting because they did a decent job of summarizing the history of this weird little corner of robotics math nerdery. However, many times the saga would build up until the final word on the topic was Thrun et al., which is fine, but I sometimes wondered if I was reading a Thrun biography. On page 144, we are reminded that, "Entire books have been dedicated to filter design using inertial sensors." So it could be even more painfully specialized, I suppose, than Sebastian’s greatest hits, which are genuinely impressive.

I was quite frustrated to read this on page 329, "Little is currently known about the optimal density of landmarks, and researchers often use intuition when selecting specific landmarks." It goes on to say, "When selecting appropriate landmarks, it is essential to maximize the perceptual distinctiveness of landmarks." I’m a big proponent of making these gruesome algorithmic/computational problems as easy as possible. Yet the topic of eliminating the uncertainty with environmental augmentation is never mentioned. It would be like fretting over how hard it is to train people to memorize all street names and features because putting up street and road signs would be expensive. But, hey, my little thoughts on how horrifically hard problems might simply be averted with a dirty trick are probably not appreciated by people whose job it is to solve hard problems.

There were some interesting fun bits of knowledge that I had never heard of. For example, the fact that terramechanics is a thing is interesting to me. And I learned that the Aurora CEO Chris Urmson once worked on an autonomous robot to search for meteors in Antarctica which is very cool (in many ways). That reminds me of my concept for autonomous archaeology robots which also would use a lot of the ideas from this book to make very accurate maps of where items were found.

I don’t think that this book was remarkable for a graduate level textbook, but wow, what a crappy way to teach something! The first thing to complain about is that pseudocode equals pseudoquality. As it says on page 596, "This algorithm leaves a number of important implementation questions open, hence it shall only serve as a schematic illustration." The "algorithms" were, to me, useless. Implementing them from the opaque pseudocode, scribbled with frantic Unicode hand waving, seemed no easier than thinking of a decent algorithm myself directly in code. It’s like betting someone that you climbed Mt. Everest but instead of just showing them a picture of you on the summit, you say that they’ll need to climb the mountain too to see if there really is proof of the deed up there. Just write real code! This isn’t probabilistic abstract thinking! Everyone who looks at this book will want this technology on a machine that runs software. Showing some real code could highlight good practices throughout, easily demonstrate algorithm effectiveness, and easily prove the algorithms even work at all.
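To put my money where my mouth is, here’s what real code can look like: the standard low-variance (systematic) resampling step used in particle filters, written as plain runnable Python. This is the generic, widely known technique, not a transcription of the book’s pseudocode, and the example particles and weights are invented.

```python
import random

def low_variance_resample(particles, weights):
    """Low-variance (systematic) resampling: draw one random offset,
    then take M evenly spaced pointers into the cumulative weight
    distribution. Heavily weighted particles get copied more often."""
    m = len(particles)
    total = sum(weights)
    step = total / m
    r = random.uniform(0.0, step)   # single random draw
    out, c, i = [], weights[0], 0
    for k in range(m):
        u = r + k * step            # evenly spaced pointer
        while u > c:                # advance to the particle whose
            i += 1                  # cumulative weight covers u
            c += weights[i]
        out.append(particles[i])
    return out

random.seed(1)
particles = ['A', 'B', 'C', 'D']
weights = [0.70, 0.10, 0.10, 0.10]  # 'A' explains the measurement best
resampled = low_variance_resample(particles, weights)
print(resampled)  # 'A' should dominate the survivors
```

Twenty lines, no Unicode hand waving, and you can actually run it to check that it does what the prose claims.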

I was really not delighted with the gruesome math and the unnecessarily harsh, though no doubt typical, notation throughout. It certainly was great practice for slogging through such muck. I definitely feel more prepared to read obfuscatory stuff like this in the future. It was so baroque that it was hard for everyone to keep it straight. In Table 16.3-7, for example, there is Q(u) = 0, but then the text on the next page talks about it as "all Qu's." Yuck. I did not spot a rho, nu, iota, zeta, or upsilon — though I could have overlooked them during my quick census. All other Greek letters made an appearance, at least half in both forms! Did I mention that just writing software, a language that all roboticists must speak, would be much better?

Sometimes even the algorithm outline was not especially encouraging. On page 366, for example, "A good implementation of GraphSLAM will be more refined than our basic implementation discussed here." Gee thanks!

I feel like, with the intense level of math, theory, and algorithms, mentioning real world robots at all may be premature. I got the feeling that all of this math would be more intelligently applied to abstract computer models only, and that talking about real applications just muddles things. I was even reminded of automata curiosities, and that is finally explicitly mentioned (referring to Rivest and Schapire 1987a,b) in the final paragraph of the book’s text!

I sure wish I had this book’s TeX source because I would love to search and count the occurrences of these words: "straightforward", "obvious", "of course", "simply", "easily", "clearly", "standard simplification", etc. I would bet $50 that some condescending word like that appears more than 600 times, or on average at least once per page. I’ll leave that as "an exercise for the reader". Ahem. Provide some source code proof that this stuff works and then I’ll start feeling like I’m the dumb one for not having implemented it!
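In case anyone does have the TeX source handy, the whole experiment is a few lines of Python. The word list is from above; the sample text here is obviously contrived.

```python
import re

# The hand-wavy words I would count, per page, over the whole source.
HANDWAVE = ["straightforward", "obvious", "of course", "simply",
            "easily", "clearly", "standard simplification"]

def handwave_count(text: str) -> dict:
    """Count whole-word occurrences of each condescending phrase."""
    low = text.lower()
    return {w: len(re.findall(r"\b" + re.escape(w) + r"\b", low))
            for w in HANDWAVE}

sample = ("The derivation is straightforward and the result follows "
          "easily; clearly the reader can simply verify it. Obvious!")
counts = handwave_count(sample)
print(counts, "total:", sum(counts.values()))
```

Run that over ~650 pages of source and the $50 bet settles itself.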

I’ll make a list of errors I found to give you a sense of the production quality in general.

  • p167 "…pre-cashing…"

  • p213 "…represents uncertainty due to uncertainty in the…"

  • p267 "…the type [of] sensor noise…"

  • p281 "…can easily be described [by] 105 or more variables."

  • p370 "The type map collected by the robot…" [type of map?]

  • p388 "…SEIF is an … algorithm…for which the time required is … logarithmic is data association search is involved."

  • p403 "Here mu is a vector of the same form and dimensionality as mu."

  • p411 "…sometimes the combines Markov blanket is insufficient…"

  • p414 "…but it [is] a result…"

  • p419 "Once two features in the map have [been] determined to be equivalent…"

  • p433 "…this techniques…"

  • p433 "…to attain efficient online…"

  • p460 "…advanced data structure[s]…"

  • p480 "…fact that [it] maintains…"

  • p487 "…running the risk of loosing orientation…"

  • p525 "…the vale function V2 with…"

  • p550 "xHb(x)"

  • p554 "…when b_ is a sufficient statistics of b…"

  • p592 "MCL localization" is redundant.

Really, that’s pretty good for such a massive tome (in English by German dudes; also Hut ab, "hats off").

I’m glad I read this. It was definitely an experience, and I feel a kinship with grad students who have been hazed this way. But if you really want to learn this stuff for practical applications, I’d just pay Sebastian for Term 2 of the Advanced SDCarND program and save yourself a lot of trouble. And get some working code instead of just a mental workout!

GeoGad Blended

2018-05-27 16:31

I was doing some work on tire dynamics while planning a vehicle physics engine. In the course of that project I wanted to visualize some triangles. Easy right? If you saw my previous post about learning Blender you’d think that would be especially easy for me. But strangely, it was not.

In Blender you can make an equilateral "circle" with 3 sides. You can make an icosahedral sphere (made of triangles) or a triangular fan or a cone made of triangles. You can create triangles out of rectangles with quads_convert_to_tris() or poke() and then delete the triangles you didn’t want. But simply generating a lone arbitrary triangle is weirdly hard to achieve. It is weird because from a computer science standpoint, 3d computer modeled geometry is composed only of triangles.
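For what it’s worth, the closest thing to a direct route I know of is to drop into Python and build the mesh from raw data. The from_pydata call is real Blender API (this is the 2.8-era style; older Blenders linked objects via bpy.context.scene.objects instead of collections), but the object names and the import guard are my own invention.

```python
# A sketch of conjuring a lone arbitrary triangle in Blender by
# building the mesh from raw vertex/face data. Guarded so the same
# file also runs outside Blender (where bpy is absent).
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]
faces = [(0, 1, 2)]  # one triangle referencing the three verts

try:
    import bpy
    mesh = bpy.data.meshes.new("LoneTriangle")
    mesh.from_pydata(verts, [], faces)  # (verts, edges, faces)
    mesh.update()
    obj = bpy.data.objects.new("LoneTriangle", mesh)
    bpy.context.collection.objects.link(obj)  # 2.8+ API
except ImportError:
    print("Not inside Blender; triangle data:", verts, faces)
```

That it takes a script at all rather makes the point.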

How about three 2d lines? Amazingly, no. Blender is also not at all ideal for modeling a simple pair of endpoints connected by a line. Weird, right? I was actually so blown away to learn this that it kicked off an odyssey of heroic software engineering designed to sort this out once and for all. That odyssey is today’s story!

I had been working on learning the arcane art of controlling Blender with Python. That topic was exactly as confusing and muddled as I expected for such a baroque piece of software. But doable. And speaking of baroque, after some experiments with Blender’s Python console, I started to have a dangerously stupid idea…

It turns out that I, Chris Edwards, have written a geometric modeler. It is called GeoGad, short for Geometry Gadget. I started this project in 2004 and until about 2014 it was really only a programming language. Yes, that’s right, a programming language. A delightful Turing complete bad-ass programming language that I’ve used pretty much every day since 2004. I know it’s bad-ass because it was heavily inspired by HP calculator RPL and if you think that was not bad-ass, you’re an idiot.

The GeoGad logo is a triangle. And yes, GeoGad’s mascot is a sloth (Motto — Slow but happy and lovable).

In 2014, I added the geometry model and its functions to the language. In my system, geometry can only consist of simple lines and triangles. The lines are visible and the triangles exist only to provide occlusion reference, i.e. allow for hidden line removal. And that hidden line removal was done by a Unix command line rendering engine called to2d. That C++ program is the most obscure rendering engine in the world because I wrote that too. While developing the geometry capabilities of GeoGad, I would pipe the raw geometry model to to2d and then dump the resulting 2d vectors into a file formatted as SVG. I could have a browser constantly polling and reloading this output, which served as a display system of sorts. But that was not very user friendly, even for users like me who have different ideas about what friendly means.

At this point in the story:

  • I wanted to do some simple geometry that is overly complicated in Blender and which GeoGad is especially good at.

  • I was playing around with Blender’s Python interpreter.

  • GeoGad is written in Python.

Hmmm…… Could I maybe run GeoGad inside Blender? It turns out, the answer is yes! Check out this screenshot.


This shows how I tell the Blender Python interpreter where GeoGad lives by editing the sys.path variable. Now this Blender Python can import the important components of the GeoGad system (you can see these exact components in a similar project, dated 6 years after mine, by Peter Norvig, Google’s Director of Research).

With GeoGad’s code ready to run there was just one problem — the Blender Python interpreter does not implement Python’s input() function. This means I couldn’t run GeoGad interactively like I normally do. It may be possible to hook up interactive input to some GUI element, but for now I simply define a function gg() that takes a string of GeoGad code as input. And it’s ready to use! I start by running the GeoGad version command to demonstrate that everything is hooked up.

The next couple of commands import GeoGad’s memory model output code. If I’m running a text-only version of GeoGad, I never need to send the memory model (the geometry) anywhere special. But the real point here is to actually control Blender. So this passes a hint to the output code about where it can use Blender functions.

And finally, I demonstrate a classic programming language test.

0 1 |[dup2 +::!] 18 repeat

Can you figure out what that complete GeoGad program does? The answer and a nice comparison to other languages can be found here.
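Spoiler ahead. If you’d rather not work it out yourself, here is a plain Python simulation of that one-liner with an explicit stack, assuming dup2 duplicates the top two stack items and + pops two items and pushes their sum:

```python
# Plain Python stack-machine simulation of the GeoGad one-liner.
# Assumption: dup2 copies the top two items; + pops two, pushes their sum.
stack = [0, 1]
for _ in range(18):                   # ... 18 repeat
    stack += stack[-2:]               # dup2
    y, x = stack.pop(), stack.pop()
    stack.append(x + y)               # +
# The stack now holds the first 20 terms of a famous integer sequence.
```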

I’ve been making this look easy, but in reality, this has been a real grind. The first obstacle was that Blender, sensibly, uses Python3. When I started GeoGad in 2004 there was no Python3, so I took this opportunity to convert the entire GeoGad code base from Python2. The next ordeal was figuring out Blender’s interface functions and what might work. Blender uses a list of points and refers to them by their position; I use a dictionary of points and refer to them by numeric ID. My way allows everything to keep working without changes if some points are removed from the collection. The funny thing is that both ways behave identically until some points are removed. I spent quite a while figuring that out because sometimes it would work and sometimes I would get a Blender seg fault.
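The position vs. ID distinction is easy to see in a few lines of plain Python (an illustration, not Blender or GeoGad code):

```python
# Two ways to track points: by list position vs. by stable numeric ID.
pts_list = [(0, 0), (1, 0), (2, 0)]           # Blender-style: position
pts_dict = {0: (0, 0), 1: (1, 0), 2: (2, 0)}  # GeoGad-style: numeric ID

ref = 2  # a "line" that references point number 2

# Remove point number 1 from each collection.
del pts_list[1]
del pts_dict[1]

list_ref_ok = len(pts_list) > ref  # False: index 2 no longer exists
dict_ref_ok = ref in pts_dict      # True: ID 2 still resolves correctly
```

Until something is deleted, index 2 and ID 2 name the same point, which is exactly why the bug was so intermittent.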

Once I had added a blender command to GeoGad that could reliably put GeoGad’s geometry into Blender, it was time to start writing some stuff in GeoGad to do that. I quickly got sidetracked wishing GeoGad had Vim syntax highlighting. So of course I worked on that. Once I had rough highlighting finished, I was so delighted by it I went through GeoGad and added color to the runtime interpreter. It looks great! I’m loving that new feature but annoyingly, the Blender Python console just makes a mess of it. So I had to go back and make the colors optional. You can see some of that mess on the version string which I haven’t yet fixed.
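Making the colors optional is the usual ANSI escape trick. A generic sketch, with made-up names that are not GeoGad's actual internals:

```python
# Generic sketch of optional ANSI coloring; names are made up and are
# not GeoGad's actual internals.
USE_COLOR = True

def colorize(text, code, enabled=None):
    """Wrap text in an ANSI color escape when coloring is enabled."""
    if not (USE_COLOR if enabled is None else enabled):
        return text
    return f'\033[{code}m{text}\033[0m'

fancy = colorize('GeoGad', 32)                 # green on capable terminals
plain = colorize('GeoGad', 32, enabled=False)  # no escapes for dumb consoles
```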

Here’s an example of a GeoGad program showing off the syntax highlighting.


Even though some of the highlighting is not quite right (p0 should be white, not mixed colors) it is already a huge improvement for me and immensely helpful.

What does this code do? Since I was frustrated about triangles, I decided to make tetrahedra. So the function (GeoGad thinks of it as a live list stored in the symbol table, but same thing) tetra makes a tetrahedron’s geometry. Then there are some functions to randomly rotate and scale something (the tetrahedron presumably). The function ptsonline calculates points on a line that are evenly spaced at the interval _i (i.e. 0.6 as shown). What this allows me to do is feed the program some lines and have them replaced with a trail of random tetrahedra.
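For the curious, the math behind ptsonline is just stepping along the line at a fixed interval. In plain Python (the function name and signature are my invention for illustration, not GeoGad syntax):

```python
import math

def pts_on_line(p1, p2, interval):
    """Return points from p1 toward p2, evenly spaced at `interval`.
    Plain-Python sketch of the idea behind GeoGad's ptsonline."""
    x1, y1 = p1
    x2, y2 = p2
    length = math.hypot(x2 - x1, y2 - y1)
    n = int(length // interval)       # how many whole steps fit
    pts = []
    for k in range(n + 1):
        t = k * interval / length     # parametric position along the line
        pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts

spots = pts_on_line((0, 0), (3, 0), 0.5)
```

Each returned point becomes a place to drop a randomly rotated and scaled tetrahedron.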

Another program that I worked on (but won’t bore you with) takes the SVG logo shown above and extracts the geometry (check this HTML page’s source to see it) and builds a GeoGad model with it. By feeding this set of lines to the program shown above, I get the following result.
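Pulling line geometry out of an SVG is mostly just XML parsing. A generic sketch using Python's standard library; the SVG content here is a made-up stand-in, not the actual logo:

```python
import xml.etree.ElementTree as ET

# Made-up stand-in for the logo's SVG source.
svg_text = '''<svg xmlns="http://www.w3.org/2000/svg">
  <line x1="0" y1="0" x2="10" y2="0"/>
  <line x1="10" y1="0" x2="5" y2="8"/>
  <line x1="5" y1="8" x2="0" y2="0"/>
</svg>'''

NS = '{http://www.w3.org/2000/svg}'  # SVG elements are namespaced
root = ET.fromstring(svg_text)
segments = [tuple(float(el.get(k)) for k in ('x1', 'y1', 'x2', 'y2'))
            for el in root.iter(NS + 'line')]
```

A real logo would likely also need path elements handled, but line elements extract this cleanly.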





That is a superb result! I’m delighted with how perfectly GeoGad’s strengths complement Blender’s. A lot of times when I model something I want a bunch of reference lines that clearly lay out geometric constraints and known geometry that must be designed to. I generally am less concerned with how it looks and more concerned with how it is. GeoGad helps me feed Blender explicit data that adheres to hard constraints. If there’s latitude in other parts of the modeling process to sculpt something to look nice, great, that’s what Blender excels at. Having Blender and GeoGad working together is really the best of both worlds for me.


For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2018