Chris X Edwards

There is a Quake CLI option `-nojoy` to disable joystick. The documentation assures us, "the fun factor in the game will remain the same".
2021-12-07 10:14
XSS exploit warnings simply using "C++" in search terms; worst language name ever? Well, C#, Go, and "Java"script put it in perspective.
2021-11-22 08:41
A scammer rented $1.5e6 of textbooks with gift cards and simply resold them. Is this a great crime? Or greatest crime ever?
2021-11-15 13:23
I have a superstition that many queries of carrier package tracking information raise the priority of that delivery. Because it should.
2021-11-04 09:48
After ~5 code additions with error-free compiles, I add garbage just to verify I'm actually compiling the file I think I'm working on.
2021-11-02 18:01
Blah Blah

Skate Sharpness

2021-11-29 16:24

As I’ve written about before, for someone who does not compete, I have a pair of absurdly nice ice skates. This however is the story of why I have not been ice skating in a few weeks. This graphic tells you way more about ice skates than I’m sure you care about. Rather than read all that and worry about details, just roughly absorb the point that ice skates are quite technically specific and people get very fussy about them.


The part I want to focus on is the phrase "speed skates are so sharp". How sharp? Sharp how? Why would they not be sharp? It turns out that steel does pretty well against ice; unfortunately the impurities in the ice are what grinds away your perfect edges. So, for example, when the Zamboni drives across the parking lot to dump the ice it shaves off and later comes back and drives all over the ice tracking dust and abrasive minerals… well, true story.

One of the reasons that I was interested in speed skates is that hockey skates have a complicated edge that you need to take to a hockey shop to get sharpened.


The RoH is the radius of hollow and that curve is carved into the blade’s profile using an expensive specialty grinding machine. Well, you hope it is — it’s actually kind of difficult to audit just how correctly your skates were sharpened. But speed skates are, in some respects, simpler. With speed skates, you can sharpen them yourself! The caveat, of course, is that if you own speed skates, you pretty much must sharpen them yourself.

Ok, no problem. I’m up for that kind of thing. But this is the story of why my skates were too dull to skate on safely for so long and what I finally did about it. First, we need to take a curious diversion to the world of woodworking. You may care about woodworking about as much as speed skating, but bear with me.

This superb woodworking tool is called a card scraper.


Here it is shown in use doing what it does, which is — you guessed it — scraping wood. What’s amazing about this tool is that it’s just a stamped "card" of metal, nothing fancy. And yet it can produce astonishingly smooth finishes. It can shave down unruly wood grain problems as well as do useful tasks like stripping old paint. Basically it does much the same thing as sandpaper, only quicker, better, and with no dust.

Sounds awesome, right? Only one tiny problem — you need to sharpen them yourself. You see the theme, right?

In this video, the titan of English woodworking, Paul Sellers, explains all about card scrapers. What I want to focus attention on is the wooden block shown below that he uses to align the scraper for perfect sharpening. I’ve seen enough of his lectures to know that the cut in that block was not made by a band saw with a precision fence. No, it was cut freehand like a freaking ninja with an English rip saw made of Sheffield steel that Paul sharpened personally with a hand file. When this man talks about sharpening things, everyone would do well to take notes! Apparently Paul thinks that a wooden jig can hold a blade in position for accurate sharpening. Good enough for me!


The actual sharpening technique for these scrapers is quite interesting. When they are new from the factory, they are quite useless, with an edge usually straight from the industrial metal shear. The first thing that must be done is to file the edge down so that all imperfections are ground out and it is perfectly perpendicular to the sheet. Many people use some kind of guide block, as Paul is demonstrating, to make sure this goes according to plan. Note that this operation will kick up a burr on the sides. These scrapers are actually made of somewhat soft metal (softer than skate blades) so this burr can be deliberately controlled. Using a very hard burnishing tool, or any hardened ground rod, the burr is pushed off the end and then pressed down so that the profile resembles a kind of mushroom shape. That is the shape that scoops shavings out of the workpiece.


Ok, great. Back to speed skates — how are they sharpened?


As you can see, the process is quite similar. The main difference is that the end goal isn’t a controlled burr suitable for peeling thin layers of ice. The end goal for speed skates is a perfect ninety degree corner. It turns out that if that corner really is geometrically perfect with no chamfer or bevel, it will be razor sharp and reliably cut into anything softer than the steel such as ice (and jugular veins?) and not slip out of position.

Great, so how does one go about actually doing this on real skates? Normal people — well, "normal" speed skate owners — tend to also own a jig like this.


You can see the sharpening stone and the smaller deburring stone. This deburring stone simply pushes off that burr that a card scraper cuts with.

When the skates are loaded in such a jig, it looks a bit like this.


This photo shows a pretty fancy jig and there are black horizontal top stops set in place to make sure the blades are set at exactly the same height. Once the skates are locked into final position, the stops are removed to facilitate sharpening by moving the stone back and forth over the tops (bottoms, really) of the blades.

Of course I am going to be different and weird. And "thrifty" if we lived in a world where my time had no value. I had my typical thought, hmm… I wonder if I can make something myself that works as well or better. Out of junk I have lying around. It seemed like a good project to practice using Blender for design work. Here’s what I came up with.


Remember that time I got all insane about photogrammetry? And if you do, you may have wondered why the heck I didn’t simply pick a better subject! Well, this is why.


I realized that the clearances in any design like this were going to be surprisingly tight. Sure, there are easier ways to sort this out, but I wanted to see if Blender could help me solve problems like this.


This crude model that I manually hacked out of the default cube is just not accurate enough to answer the questions that need to be answered. Mainly, is there interference with the center support? The closer the skates can be to each other, the better. Too close, however, and the whole thing is useless. Once I replace that with the photogrammetry model, you can see exactly what is going to happen.



This shows the final setup exactly as I envisioned it. I did realize a bit late that the sides of the top would be precariously attached after the critical slots for the blades were cut. I spent a while thinking about that and finally was able to sink some dowel rod in those sections before making the main cuts.


I was very happy that the final result actually did look like this!



That photo shows the skates just resting loosely, not clamped. I was very pleased that, once everything was clamped, the geometry confirmed that Blender really is very good at predicting such things.

The clearance holes for the blades are shown as small as possible and I started out with that. I thought they would be sufficient to make the sides quite flexible for clamping, but no, they were impressively stiff. I kept opening those holes up until they had exactly the tension I wanted: just enough to get the skate and reinforcing spacer in perfectly. Very easy to load now too.

I’m not delighted with Blender’s ability to provide working shop prints, but it can be done. More problematic is Blender’s reluctance to actually tell you detailed high precision measurements that I know it knows. For example, it only measures angles to the nearest whole degree (see lower right); this was insufficient and I ended up calculating them myself using trig.
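The kind of trig involved is nothing exotic. Here is a sketch in Python with made-up dimensions (not the jig’s real numbers, which I won’t bore you with) recovering an angle that Blender would round to a whole degree:

```python
import math

# Hypothetical rise/run read off the model -- made-up numbers for
# illustration, not the jig's real dimensions:
rise = 12.7  # mm
run = 88.9   # mm

# The angle Blender would display as just "8" degrees:
angle = math.degrees(math.atan2(rise, run))
print(round(angle, 3))  # -> 8.13
```

Two precise lengths and an arctangent get you an angle to any precision you like.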


Of course since I like biscuits there are biscuits all over this thing. I will say though that in terms of execution, this went the most poorly. It can be hard to get a good datum for biscuits in the middle of a board. You must clamp a reference edge and make sure it doesn’t move around. Let’s just say, it moved around more than I would have liked.


This project would seem to require machine shop accuracy, but that’s mostly not true. However, there is one critical property that must be very accurate: parallelism of the primary blade holding surfaces. My plan for that was to rely on my table saw’s rather good ability to repeat simple cuts. The trick then was to make sure the surface that was against the saw’s table was extremely flat. This would prevent rocking or warping that could cause the cuts to not be parallel.

My technique may seem silly - ok, it was silly - but it worked. I used a test indicator and a surface gauge on my glass table to measure the flatness. Yes, I know it’s not a granite surface plate. I was actually able to pretty reliably map topo contours of exactly what the surface looked like.


How did I cut the top to make it perfectly flat? My card scraper of course! You see the theme, right?

Of course I had to sharpen and burnish the scraper first which I did. With that turtle’s back ready to stand on, I was able to make the top of the skate jig quite flat - more flat than wood can even stay with normal indoor temperature changes. I was then able to make my critical parallel cuts that would support the blades.


And with those cuts made and the reinforcing shims in place, it all came together and was ready for action. Here’s what it looked like ready to go.


This shows the rough stone I started with and the fine diamond sharpening plate I did finish polishing with. It also shows more clearly how big I had to open the end holes before the wood was flexible enough to have the perfect tension for clamping. The dowel pin is visible too.


In these shots you can see the serious part of this whole thing. Serious speed skate technicians will look at this contraption and not just be horrified by the ghetto appearance of wood I literally had lying around; they will be horrified that the very premise of this fixture will "ruin" the bend on the blade. It’s actually almost possible to see the blades are straining to curve into a reverse C. Since we’re looking at the front, this means these skates are preset to turn left. Well, serious speed skate technicians, I actually skate clockwise exactly 50% of the time. So the quicker I can ruin this subtle property of the blades, the better! Obviously if you’re a real speed skating person who somehow stumbles upon this, do keep this in mind before following my example.


There are a few decent videos out there of people sharpening skates, but I want to highlight one in particular. I watched this girl sharpen her skates and when she whipped out a bubble level I initially thought, no way is that accurate enough for anything. I think what threw me was that she felt she had to go find a place in her house where the floor was level. What she overlooked was that she could simply pack out the jig with card stock (a common machinist trick). The idea of using a level, however, was actually very smart and correct. When I was aligning my setup I realized that with an accurate level reading consistently across the blades along their lengths, you could pretty much ensure a good result. So if you really want to watch the whole process, check her out.


Ok, I just got back from the ice rink. How did they do?


Well, I was able to skate for about 80 minutes and they were definitely better than the last time I skated. It feels like I didn’t take the burr off completely and got a bit of a card scraper edge. So still not perfect but now that I have a sharpening setup that allows me to try different things, perfection is now a plausible goal.

Fight: C++ Vs. Unix

2021-11-25 01:33

Recently I’ve been hammering away deep in the software mines working on a big project in C++. Today I came across a situation that really was begging for aggressive STL tricks. (Uh, I mean "the library formerly known as STL". Today the Standard Template Library has been wrapped up into the C++ Standard Library proper. But I’m old and started programming C++ before the book I used to learn it from had heard of STL. Modern C++ is fine. Everything is fine.)

Normally I try to avoid the typically gruesome syntax and mind-bending concepts found in C++'s esoteric features du jour, but in this case I really needed a weird container to be sorted in a weird way and then made unique in a different weird way. It turns out that is something the <algorithm> header’s functions are especially good at. I gave it a try and it was really useful, worked very well, and saved me a ton of trouble.

Hmm… Sorting and then removing duplicates… Hmm… Well, as a unix partisan I just couldn’t help but think of how easy this very specific task is to do extemporaneously at a Bash prompt with standard unix utilities. That got me thinking: I wonder how the speed of C++ compares to command line unix? Since I’d just done the hard part of getting it to work in C++, an experiment seemed pretty easy.


First, let’s get some random number test data to sort and de-duplicate.

$ for N in {1..100}; do echo $(( $RANDOM % 999 )) >> /dev/shm/rand_nums100; done
$ for N in {1..1000000}; do echo $(( $RANDOM % 999 )) >> /dev/shm/rand_nums1000000; done

Here I’m making two files, one containing 100 random numbers between 0 and 998, and another containing one million such numbers. This way we can check how well this scales. Note the path is /dev/shm/ so this test data lives in a virtual file system in shared memory; the goal is to eliminate hardware latency and access issues like that.

For the C++ version, here is the program I created, which uses the std::sort() and std::unique() functions to do the job.

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;
int main () {
  vector<string> lines;                          // Also tried `list` container. Surprisingly similar.
  string tmp;
  while (getline(cin,tmp)) lines.push_back(tmp); // A decent example of linewise standard input reading.
  sort(lines.begin(),lines.end());               // *Sort*. Count stays the same so no need to seriously rearrange.
  vector<string>::iterator unique_resize_iter;   // This iterator is needed to resize after deletions from the container.
  unique_resize_iter= unique(lines.begin(),lines.end());    // Remove duplicates using *unique* algorithm function.
  lines.resize(distance(lines.begin(),unique_resize_iter)); // Repair container so the removal properly takes effect.
  for (string s:lines) cout<<s<<endl;
  return 0;
}
What could be simpler, right?

This can be compiled with something like this.

g++ -o stl_vs_unix stl_vs_unix.cxx

Now we can see how fast it runs. I’ll send the output to wc just to minimize terminal output.

$ time ./stl_vs_unix < /dev/shm/rand_nums100 | wc -l
94
real    0m0.008s

That’s 8 milliseconds and you can see that from 100 random numbers 6 were duplicates that got removed (leaving 94 counted lines). That’s pretty decent.

How does unix do?

$ time sort /dev/shm/rand_nums100 | uniq | wc -l
94
real    0m0.007s

The first thing to note is that I wasn’t kidding about how easy this is — that’s pretty easy! But the time is respectable too. That’s encouraging.

Ok, what about the bigger data set?

$ time ./stl_vs_unix < /dev/shm/rand_nums1000000 | wc -l
999
real    0m1.598s

Here we can see that from a million 3 digit numbers, most were obviously duplicates with all 999 possible random numbers appearing in the output. This one took about 1.6 seconds using the C++ program.

And the unix style of kung fu?

$ time sort /dev/shm/rand_nums1000000 | uniq | wc -l
999
real    0m0.432s

The unix command line approach got through that 3.7x quicker!

In the past I have had trouble convincing people that before getting into software engineering heroics (or worse, buying a new cluster and leaning into global warming), it’s always best to just use simple unix tools first. They are often astonishingly good! I realize that C++ isn’t the absolute superlative speedy approach and using this kind of standard library function probably is not even the most high-performance way to do this job in C++. That said, this is the canonical idiomatic modern C++ way, and C++ is well regarded for performance. For example, a lot of very serious high performance software is written in it — e.g. game engines.

Before seeing the data, I could have been persuaded that the high octane bespoke C++ program would do better as the job size increased; after all, trivial tasks axiomatically take trivial time no matter how you do them. However, it very much looks like my bias favoring the unix command line may sometimes be justified. If a task seems like it would be ideal for solving with unix, it very likely is!

UPDATE 2021-11-26 I’ve just been reminded that there’s another unix command line way to do this job. Instead of using the uniq command to de-duplicate, it turns out that the sort command has built-in functionality that removes duplicates as it sorts: the sort -u option. That is usually the correct approach. I just did some tests and it turns out that it is 13% faster for the million items of input. But interestingly it is not necessarily a slam dunk. While the sort command should be able to quickly do the easy job of comparing values for de-duplication, we must remember that by piping any of that task off to another process, the power of unix allows us to do both tasks in parallel. It may turn out that for a large enough input with difficult comparisons, sort|uniq is actually faster (on a multi-core machine) than sort -u. The industrial engineering moral of the story is: always do your own tests with your own tasks! I talk more about some subtleties of these commands over on my unix notes, where I demonstrate why having a separate uniq program is actually necessary in the first place.
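That the two pipelines agree is easy to sanity-check. Here’s a throwaway Python sketch (assuming a unix machine with sort and uniq on the PATH; the input is toy data, not my million-number file):

```python
import subprocess

data = "3\n1\n2\n3\n1\n"  # toy input with duplicates

def run(cmd):
    # Feed the toy data to a shell pipeline and capture its stdout.
    return subprocess.run(cmd, shell=True, input=data,
                          capture_output=True, text=True).stdout

a = run("sort | uniq")  # two processes, which can run in parallel
b = run("sort -u")      # one process doing both jobs
print(a == b)           # -> True; both yield "1\n2\n3\n"
```

Same answer either way; the interesting differences are only in speed and process count.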

And just for fun, here’s a Python version.

import fileinput                    # reads lines from files named in argv (or stdin)
data = [line for line in fileinput.input()]
for line in sorted(set(data)):      # set() de-duplicates, sorted() orders
    print(line, end='')             # lines keep their trailing newline

Here’s that run.

$ time ./ /tmp/rand_nums1000000 | wc -l
real    0m1.055s


2021-11-13 10:27

An important part of my religion is that: if there is enough snow for skiing, then there must be skiing. Today was our first snow of the year.


Given that it was 63F/17C yesterday the fact that it is snowy at all is a small miracle. Obviously the very first light flurry of the year doesn’t count for skiing, right? But this is no casual autumn flurry. It’s cranking down! I did my normal walk in the woods and was getting covered in snow. Hmmm. It may barely be feasible to do the absolute minimum skiing possible. Is it? Well, best not to anger the snow gods.


Yes! It was possible! Barely. I was compelled to try out my outrageously awesome Fischer RCS Skate Plus Nordic skate skis.


I was satisfied with their awesomeness just sitting near me in my office. To be able to just touch and hold these unbelievably light skis in my hand was worth what I paid. But today I skied on them! It wasn’t much, and it was about as bad as conditions can be and still be called skiing, but I did it. The hilarious thing is I wiped out twice. It really is extremely challenging to skate ski on a very thin layer of very wet snow. I was covered in mud when I came in. But I had a great time! And once I found the cool spots in my yard and adjusted my thinking away from classic Nordic technique it was just barely doable.

The chance of me moving back to Alaska just went up quite a bit.

And how was the weather 5 hours later in the afternoon? If you look carefully you can actually see my ski tracks in the grass.


I even finished the raking that I started yesterday. Still this morning’s snow wasn’t bad for the first of the season.

I Voted

2021-11-02 15:03

If there’s one thing Americans hate, it’s when loathsome beady-eyed dirty foreigners, jabbering away in their native language, tell them how to run their country. Well, today I’m going to do it anyway!

Mostly because despite living in the USA for almost 50 years, today was the first time I was legally allowed to directly participate in its "democracy". I won’t go into the insane clusterfuck of how I ended my disenfranchisement, an ordeal greatly exacerbated by The Plague, and, incredibly, still ongoing with respect to minor details not involving voting rights.


Now that I am permitted to give a shit, I find the details of the process very interesting. The first thing that struck me is how moronic the people are who believe that government can’t do anything right. Sure, it’s as easy to find flagrant examples of government stupidity as it is to find some American guy shooting random people. But if you actually participate you realize that by saying that "all government is bad", you are saying that you yourself and all your neighbors are morons. No doubt many of your neighbors are morons, but to condemn the whole government is pretty damning to the people responsible for it, i.e. The People. And whom from among that moronic lot would these ever-critics prefer to be our autocratic dear leader, finally managing affairs with clarity and competence? My strong suspicion is that these simpletons who think so poorly of government writ large are not themselves participating with the kind of assiduousness that would give them a proper perspective.

The next thing to consider is how important these off-year elections can be. In NY they are currently voting on lowering the voting age from 18 years and 10 days to 18 years. Does that make sense? Yes. I think that does make sense. But what if young people don’t typically support your views? Wouldn’t it be statistically "better" to disenfranchise some of them? Well, that’s how some people apparently think about these matters. (The correct answer is "no" by the way.) You might not think that detail is so profoundly important in the grand scheme of things. However the amendment that removes highly problematic impediments to absentee voting is. NY is a big and serious state. If they join California in allowing people to not have to take off work to inconveniently vote on election day at a polling place, other states may follow along. Before you know it, we may start to have a real democracy that includes working people who don’t have the luxury to take a chunk of time off on Tuesdays during working hours, or old people who have a hard time getting to a polling place and waiting in line. The reason this amendment is so important isn’t because of this year’s county comptroller seat or whatever. This year’s decision about absentee voting could easily have a dramatic effect on more publicized elections to come. Oh, and there’s a fucking Plague.

Now I’m a novice voter to be sure, but I feel like I’ve had an insight that might be useful. I could easily be overlooking something, and if so, please email me and let me know! Let’s do this with hypotheticals: imagine you have a country with basically two parties. If the parties are M and N, I personally am an X. Think alphabetically. Are you following? Both M and N disgust me personally. Sure I’ll vote for N down the line against Ms but it sure would be nice to see a Q, S, or U, etc. That’s unlikely to happen but what I really do not want to see is an F or even an H or a J. This is actually the second time I participated in the election process because I actually voted back in the primary. This is a very, very weird system, and it is a bad system. Sorry, American comrades. It is seriously flawed. What this primary system encouraged me to do was vote for a P from among the N, P, and three Ms on offer. I did not do this.


I did not do this because I did not register as an N. I am a registered M. Yes, it is embarrassing when I imagine the poll worker looking at my voter registration card, however, as far as I can tell, it is the right thing to do. Here’s why. Imagine that the M party has an official platform of insulting immigrants' hats. Some of the more extreme factions, the K party, want mandatory SWAT team assaults that smash down all immigrants' front doors and set fire to their hats. I could have registered as an N, the party that wants to ignore immigrants, and voted for the P faction that wants to ban throwing most types of rocks at immigrants. But as someone sympathetic to immigrants (i.e. an American with "traditional values"), wouldn’t my better move be to direct my only limited influence to checking extreme anti-immigrant sentiment?

The problem is actually more serious in that these primaries pit M’s against other M’s and the only people who notice are themselves M’s. It is little wonder that in this contest the candidates will strive to out M the other guy — I’ll see your anti-immigrant raving and raise the stakes with a toothbrush moustache! The basic idea is that I don’t actually care about the subtle differences between the anodyne candidates slightly closer to my perspective. What I really want to ensure is that if my preferred candidates are ultimately not popular, the alternative isn’t deeply horrific and embarrassing to me.

A nice property of this strategic approach is that rather than drive a wedge between people, it encourages some oversight on the looniest of factions. Rather than trying to become as rabidly polarizing as the opposition, it would be better to reach across the aisle and limit the polarizing tendency in the first place. The worst case situation here is if all M’s register as N’s and vice versa. If that happened, you’d soon see orthodox inflammatory issues sensibly attenuated. It might then be possible to do some proper sensible debugging on the important codebase we call government. Keep your friends close and your enemies closer.

Obsessive Photogrammetry

2021-10-30 17:00

(Note that this is a long technical article that engineers and computer science people might find interesting. Everyone else is encouraged to at least scroll down and look at the images which tell the story at a higher level and are pretty cool.)

Whenever I talk to people about machine learning or AI, I can guarantee that the conversation will be well worth their time because of one tiny piece of crucial advice I always give. That advice is:

If you do not see how an AI system fails, you do not properly know how it succeeds.

Seems simple and almost obvious, but the world is full of AI hucksters trying to squeeze money out of credulous VC whales by sweeping the inevitable failures under the rug. That’s all AI systems — they all fail in some way, often spectacularly and comically.

So that’s my AI golden wisdom for you — learn from the mistakes. Today’s topic, however, is not about AI or machine learning at all. No neural net voodoo is at work in any part of the magical art of photogrammetry. (Well, there is a variant involving semantic segmentation — my specialty — but let’s leave that exotic hybrid for another day.) Still, my golden wisdom about evaluating AI applies equally well to photogrammetry. And today I will show you some failures!

First, what is photogrammetry? Since airplanes first started flying over mountains, people got the bright idea to take photos and try to figure out how tall the mountains were from the photos. Extracting 3d data for topographical maps from 2d aerial photography was one of the first main applications of photogrammetry. This discussion is about the more generalized modern automatic technique using computers, an idea I’ve been interested in for a long time.

Way back in the early 1990s, I was working in a machine shop (i.e. 3d printing for steel) and I was making lots of 3d computer models and programming CNC machines. I also had a hobby of photographing classical sculpture. With that background I wondered, is it possible to have a computer automatically convert a photograph into a 3d model? The question for my entire pipeline basically was, can a computer sculpt? I dreamed of showing a computer a model and having it carve that subject out of steel. I decided at that time that, yes, it was plausible. I had no idea how to do it of course! Though far from established mathematically, I reasoned that if some human could look at a subject and proceed to carve it convincingly into a block of marble, it must be possible to take 2d images of a scene and produce 3d models from them.

It turns out I was correct. We know this now because, 30 years later, it is a highly refined process that is mathematically well explored and implemented. Photogrammetry is exactly a technique for creating 3d models out of 2d images. When I pondered the problem long before people had ever seen digital cameras, I got stuck on how concave features could be ascertained. But with full rich photographic bitmaps, there are some clever tricks that solve this problem.

Here’s a rough basic rundown of how the clever modern process works.

Step One - Collect a lot of photographs of your subject from all angles. Consistent camera optics are helpful. Redundant coverage of the same part of the subject in multiple shots is important. Anywhere from a handful to a couple hundred images are needed depending on the goals for the final outcome.

Step Two - Feature detection is a process where an algorithm looks for distinctive patterns and notes a summary of each pattern along with its location. One common one is known as SIFT, the Scale-Invariant Feature Transform — sounds arcane, but the scale invariant part just means that it can still spot a feature even when the feature appears bigger or smaller, e.g. because the camera moved closer or farther away. It transforms the "features" into something usable and comparable. These features tend to be something like a histogram of gradients, i.e. the distribution of a certain profile of intensity changes.

Step Three - Feature matching looks at all the feature summaries found in all the photos and tries to figure out if any are showing the same thing but from different angles. These features will never be identical so there’s some adjustment for how sensitive a match needs to be.

Step Four - Figuring out where these matched features are in space is called Structure From Motion. Imagine someone takes a bunch of photos of a football goal from different positions (the "motion" part) in the stands. The feature detection of step two notes where the corners of the lines and goal posts are. Step three figures out which are which. This step now triangulates what the geometry must be for this previously calculated information to be rational. What I find impressive is that this step computes where in the stands the photographer was when each photo was taken!

Step Five - With the camera positions figured out, the process can now scan across all of the pixels of two (or more) images and mathematically compute where two cameras shooting rays (think of snipers looking through scopes) to that part of the image would intersect. Small corrections to best match what is actually found at those pixels help refine the depth. Doing this with thousands of data points, a very accurate model can be constructed, even with concavity (though not complete occlusions, like deep inside of a shoe).

Step Six - With a lot of data points known about the surface of the reconstructed geometry, typically a usable mesh model is automatically generated. It turns out that’s an entire science and art itself.

Step Seven - Since we have access to dozens of photos of the subject it’s a reasonable time to lift texture information. By pulling color swatches out of the photos, and wrapping them just right, we can have an automatically colored model as well as just the mesh geometry.

That’s the process and I’ve been very keen to try it since I first heard about it! At first I struggled with the software that people were using. I’ve tried several approaches and had a lot of problems with dependencies and crashes and compatibility issues. Finally recently I had a need/desire to have a model of my new ice skates. I wondered if I could give photogrammetry another try.

The first thing I did was load up a set of test images from the British Museum. It turns out that photogrammetry is used extensively to accurately document and study archaeological artifacts. In this case, a skull. I ran the skull image set through the software package Meshroom and was delighted to find that it now works! Here was the result I got.


The geometric detail, shown here in Blender, was amazing and the textured result was even more astonishing!


Excellent! Buoyed by such a great success I was ready to dive in! What I didn’t realize then — and now suspect — is that this skull data set is one of the ones carefully used to tune Meshroom’s default settings. My dreams of an easy win were quickly shattered.

I knew my subject was going to be a challenge. Photogrammetry really likes things with some good obvious texture features. Archaeological relics are usually perfect since they’re rarely completely smooth and new. Being scratched and dirty is a plus! Also photogrammetry becomes seriously confused by reflective or transparent materials. My subject had a lot of carbon fiber at the heel which on the one hand has a lot of texture, but it is also shiny and reflective. The reflections, which appear to move around the object’s surface, cause quite a bit of trouble. Also the top (fake) leather is polished to a high even gloss. I discovered that Velcro may actually be the worst material possible. To help facilitate the process I applied a bunch of little masking tape markers all over the skate. I also used 3M 658 Labeling & Cover-up Tape which is great stuff — like a whole roll of the sticky part of Post-it notes. If you can’t add markers or clean things up with post-processing edits, you have a challenge. You can actually buy spray cans that apply an opaque powder to objects for better photogrammetry results and then disappear in a few minutes! Yes, that’s a real thing! My tape markers didn’t adversely affect my goals and clearly helped a lot with the process.

A few months back, I had purchased a new camera for projects just like this. This photo shows my new camera (a mirrorless Canon EOS M50 Mark II, very nice) and the abysmal photo quality of my stupid telephone.


Thinking that just feeding the software some high quality images would solve all the problems, I shot 100 or so crystal clear 6000x4000 images of my subject. And…. Complete fail!


Since it took about 10 hours to run that, I decided to back off and downsample my images to 800x533. Amazingly this produced some kind of rough model in about 10 minutes.


Once I cleaned up some images with questionable views, I was able to get it to this.


Great! Ok, that is on the right path! I refined my setup and re-shot my images to make sure my set was very clean and my camera settings were perfect. To give you an idea of what these images are like, here are thumbnails of my complete input set.


Unfortunately, I was still getting results like this with full resolution images.



Realizing that the problem seemed to be that the software was detecting features of my background I wondered if I could somehow magically remove the background. I shot some images of the setup without the subject and did exhaustive testing of ImageMagick’s compositing tricks. This is a summary of the subject as Src and the clean plate image as Dst.
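
The clean-plate idea itself is simple to sketch without ImageMagick: subtract the empty-scene image from the subject image and keep only pixels that differ by more than a threshold. A toy pure-Python version on grayscale "images" as 2D lists — real photos need noise handling and a real image library, and this is not the exact compositing operation I ended up using:

```python
def mask_subject(subject, clean_plate, threshold=20):
    """Keep subject pixels that differ from the clean plate; zero the rest.

    `subject` and `clean_plate` are 2D lists of grayscale values shot
    from the same camera position.
    """
    return [
        [s if abs(s - c) > threshold else 0
         for s, c in zip(srow, crow)]
        for srow, crow in zip(subject, clean_plate)
    ]

clean   = [[100, 100, 100],
           [100, 100, 100]]
subject = [[100, 200, 100],   # a bright object occupies the middle column
           [100, 210, 100]]
print(mask_subject(subject, clean))  # → [[0, 200, 0], [0, 210, 0]]
```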


Ultimately I never found a magic bullet for completely cleaning up the background automatically but this image is so useful for ImageMagick projects, it’s good to leave here for reference. I did come up with some helpful ImageMagick compositing tricks that I recorded on my photogrammetry notes.

As you can see with the ImageMagick efforts, I was in full berserker mode not really ready to take no for an answer - I was going to make this work! So I hand edited half of the 112 images. Yup. At least I got very quick at doing it. And here’s what I got.


Since that was decent and an improvement, I hand edited the rest of the 112. And here is the result of that.


Better. But still very frustrating since I had made my input as perfect as I thought it was possible to make. That left only settings. Even though all sources strongly recommend using the highest resolution possible, I started to get the feeling that my input was just overpowering the process. Where a normal data set could be sure of a match if 200 features were identified, my high resolution set could come up with 200 spurious matched features in any random square centimeter of the background. By turning up the required feature matches to 750, I finally started to get results. Of course this sounds easy enough, but remember that each run of this high density data took 10 hours! Finally with the new setting I started to get some acceptable results.


It wasn’t perfect. Here is the resulting mesh in Blender highlighting all the unconnected geometry (which obviously can’t be right).


To give you an idea of how fine a mesh we’re talking about, here’s a close up of one of the worst of these regions. The circles are Sharpie marks I put on the board to help with feature detection.


Here’s another look at the level of detail, this one showing the texture information applied.


Finally, here is the model created from 112 high resolution photos.


Yay! That is an amazing model, but actually not terribly useful. After picking up so much detail, I now needed to thin down the mesh to something hopefully still highly accurate, but less unwieldy to work with. Finally, using Blender modeling magic I was able to produce this usable model with a very modest polygon count (from 941k vertices and 1882k faces down to only 14k vertices and 16k faces).
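
The mesh-thinning idea can be illustrated with the simplest decimation scheme, vertex clustering: snap vertices to a coarse grid, merge the ones that land in the same cell, and drop triangles that collapse. Blender's Decimate modifier actually uses a smarter edge-collapse algorithm, but the trade of vertices for fidelity is the same; this pure-Python sketch is just to show the principle:

```python
def decimate(vertices, triangles, cell=1.0):
    """Vertex-clustering decimation.

    `vertices` is a list of (x, y, z) tuples; `triangles` a list of
    (i, j, k) index triples. Vertices falling in the same grid cell
    merge into one, and degenerate or duplicate triangles are dropped.
    """
    cell_of = [tuple(round(c / cell) for c in v) for v in vertices]
    remap, new_vertices = {}, []
    for key in cell_of:
        if key not in remap:
            remap[key] = len(new_vertices)
            new_vertices.append([k * cell for k in key])
    seen, new_triangles = set(), []
    for a, b, c in triangles:
        t = (remap[cell_of[a]], remap[cell_of[b]], remap[cell_of[c]])
        if len(set(t)) == 3 and frozenset(t) not in seen:
            seen.add(frozenset(t))
            new_triangles.append(t)
    return new_vertices, new_triangles

# Two nearly coincident vertices merge; the two triangles become one.
verts = [(0, 0, 0), (0.1, 0, 0), (2, 0, 0), (0, 2, 0)]
tris = [(0, 2, 3), (1, 2, 3)]
nv, nt = decimate(verts, tris, cell=1.0)
print(len(nv), len(nt))  # → 3 1
```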


It still looks great!


You can see that it looks like there are chunks missing from the model. Because there are. I could actually clean that up by hand if I wanted to. But obviously I’ve sunk way, way, way more effort into this model than was strictly necessary for practical use. This was really a learning exercise and there was no point in re-sculpting the mesh to clean up defects that reflect the limitations of the photogrammetry process. I have a perfectly usable mesh for my original purpose. I’ve learned a lot. And the process and results are incredibly cool!

