Chris X Edwards

Just realized I've never heard of autonomous vehicle AI training on what gamers call force feedback. Gamers prove it's obviously important.
2018-01-22 11:19
The AI overlords will take over and destroy humanity when the Black Eyed Peas write a song called, "Hey Google, Turn Up The Volume."
2018-01-21 20:14
I never expected to see this phrase: "Ferrari SUV". Wait... Lamborghini too? Well, why not? They'll mostly get driven in video games anyway.
2018-01-18 11:37
Lots of people smoking weed down at the beach this morning. No more than normal. Let's hope the same holds for drivers.
2018-01-17 09:31
Heard some of a speech by that idiot, Ronald Reagan. Today he sounds sane, decent, and even thoughtful and intelligent. By comparison.
2018-01-10 11:53
Etc.
--------------------------

Review: Fantasyland

2018-01-21 19:17

This.

That would be my satisfactory one word review if I liked this book. But I loved this book, so I want to say more about it.

America is batshit crazy. That is the premise of Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History. Note that haywire is the premise, haywire is the starting fact. If you’re good with that, then this book walks you through the entire history of the USA as seen through the lens of all that crazy which has, according to the book, been building up in the collective soul of the nation like Alzheimer’s plaques aggregating in Ronald Reagan’s neurons.

For over 400 dense but expertly crafted pages, Andersen fires away with astonishing historical trivia clearly illustrating that the USA has been doing some idiosyncratically wacky stuff ever since before it officially existed. He goes into detail about the kinds of people who, throughout history, were motivated to come to the New World and/or move west in it: get-rich-quick gold dreamers and religious nuts basically. Wave after wave of them.

He does a nice job of showing how the USA is some kind of weird hybrid offspring of the Protestant Reformation and the Enlightenment. Protestantism basically was the rejection of the more inconvenient wacky crazed superstitious nonsense promulgated by God’s special man in Rome. But once that door was open two things were possible. First, the Enlightenment where some of those less credulous believers started to take things to the next level: well, if papal indulgences, etc., are a load of pope-serving nonsense, maybe some of the other crazy stuff the religious leaders say is clergy-serving nonsense too. In fact, thought many influential Enlightenment thinkers, maybe the whole thing is a ridiculous con. So far so good with the Enlightenment.

But that same spirit of debugging dogma had another odd, quirky direction it could take. Instead of asking "is all this legacy nonsense really necessary?" some newfangled Protestants were asking "shouldn’t we be adding more nonsense?" Not just something more extreme, but more wacky. The reason for this is that one of Protestantism’s key points was that special magical people (priests, cardinals, etc.) who stand between you and God are probably going to do a bad job. True enough! So the Protestants said, hey, everyone should read the Bible themselves, interact with God, and interpret the resulting hallucinations in their own special way. In these times long before antipsychotic drugs, it seems some people really went crazy with this. The most obnoxious ones were treated like obnoxious people normally are and they finally felt enough pressure to leave civilization and the company of the not-so-obnoxious people. This was the prototypical American, the Puritan.

Of course moving to a hostile wilderness was no holiday and all the obnoxious people who were lazy and weak were weeded out. Additionally, survivorship bias gave the colonists who did not die of indigenous diseases and hardships reason to double down on their belief in God’s providence. Eventually, in the early days of white settlement in the American colonies, the population skewed towards obnoxious magical thinkers who were motivated, capable, eccentric, and lucky.

What about that Enlightenment? Wasn’t that doing some good? Sure. I didn’t say every early American was a witch burning religious nut. A lot of people leaned toward Christianity-lite deism which believed in the rough moral ideas of religion (it’s super uncool to kill thy neighbor) but not Christ’s putative fantastical magic acts.

For much of my life I have had the explicitly stated philosophy that the religion and crazy thinkings and beliefs of others are fine by me if for all intents and purposes they don’t affect me. That was how I conceptualized the limits of my religious tolerance. One of the main exemplars of the reduced magic deist way of thinking, Thomas Jefferson, was typical of the exact same American spirit of cautious latitude towards religious nuttiness. He had a more delightfully poetic way of putting it, "But it does me no injury for my neighbour to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg." With such an almost aggressively laid-back attitude, the American project was officially started. Of course making such an attitude the literal Rule Number One gave great encouragement to people who were inclined to think up all kinds of crazy humbug. Maybe it’s the American side of me, but I am actually sympathetic to that; I feel the alternative of repression is worse.

Ideas find currency or they fade into obscurity. Richard Dawkins likened this process to biological evolution. Just as natural selection determines the outcomes of genes (a fact Andersen’s book points out is not believed by most Americans), Dawkins believed there was a cultural evolution for ideas, "memes" as he called them in this context. Back in the early USA, there are several memetic possibilities to consider. First, perhaps the environment for ridiculous nonsensical memes was just better in America because of the temperament of Americans. Or, perhaps when there is a meme explosion under a regime of very free thinking, crazy memes are generated slightly more often. In a bubbling cauldron of absurd ideas, if you stoke the fire and get way more crazy ideas than other systems of governance, it is not axiomatic that the same cauldron will produce a commensurate amount of good sense to keep the crazy stuff in check. There are lots of reasons that crazy American thinking could be exceptional.

Andersen’s book doesn’t get into that kind of analysis, but what it does do is catalog the entire history of all those wingnut memes which have had important and profound effects on the nation. And wow. It’s jaw dropping. And it’s not just batshit insanity that’s a problem here. Americans are super creative, make no mistake. Plausibly semi-sane American things like P.T. Barnum, Hollywood, Broadway, Vaudeville, the CIA, role playing games, novels, theme parks, modern Halloween, comics, paintball, advertising, sports, drugs, pornography, video games, etc., etc., purposefully do a stupendously good job of intermingling reality and fantasy.

This tendency to make fantasies as real as possible and the real as fantastic as possible is a quintessential American specialty. The imagination of Americans is awesome. The kind of awesome that puts dudes on the moon (or fakes it convincingly enough for me). The real point of the book is that when you have people busy working on making the mundane seem fantastic (e.g. all of advertising) and others working on making fantasies seem real (e.g. Hollywood) at some point they collide. The book makes a good case that we are seeing that now. Fake news. Reality TV. If you had to say whether the things on Facebook were ostensibly true or false, how would you even answer that?

Kurt Andersen isn’t some yokel. His writing is practiced and immaculate, yet quite lively and entertaining. It’s easy to imagine him graduating from Harvard (no longer pursuing its original mission to teach magical thinking) with honors (he did). Indeed, it is superlative prescience that he was so deep into writing this book when the apotheosis of the fantasy-reality chimera came to dominate our cultural bandwidth.

The reason this book is so topical and important is that so many people are looking at the state of the nation and basically asking, WTF? Americans couldn’t do any better than a guy like Donald Trump? Seriously? If you’re gobsmacked by the nature of the guy in office and trying to make sense of it, this book is a huge help. (I bet you are a bit in shock about the state of the nation if you’re literate enough to read a book!) It goes a long way towards explaining exactly just WTF.

And it’s not a neat tidy deal. We can’t just say that the bad people did some silly things and if we push back on that, it will all be good. How much sensational (false? false-ish?) awesomeness do you want in your news, for example? How much social media (useless depressing crap?) do you want? How much reality TV? How much celebrity? Well, the people are speaking on these issues and at this point cutting back to a more realism-based perspective isn’t looking too likely.

It’s not even enough to wish for "the truth" and a return to reality-based thinking. This new breed of Russian mail order presidents makes things very complicated. When Trump makes crazy statements that, in the real world, are patently false and gets called out for them, he simply claims that any criticism is "fake news". This reminds me a lot of logic puzzles where one guard always lies and one always tells the truth and you have to figure out which is which when one says "the other guard does not lie". Those puzzles are puzzling! Requiring the populace to constantly be thinking like that is a non-starter.

So we have a "leader" who is so entertaining (like Darth Vader) that he dominates the attention of the entire nation. Politics became entertainment, fantasy. And this government, now unmoored from reality, is a new realm. Fantasyland.

GPU Machine Learning And Ferrari Battle

2018-01-17 00:47

It used to be that if you were an exclusive Linux user (guilty!) gaming was pretty much not something you did. There were, relatively speaking, very few games for Linux. However, that list has been growing extremely quickly in recent years thanks to Valve’s SteamOS which is really a euphemism for "Linux".

With this in mind, some time ago (a couple of years?) I purchased an ok graphics card for my son’s gaming computer. Now I’m pretty thrifty about such things and I basically wanted the cheapest hardware I could get that would work and that would reasonably play normal games normally. As a builder of custom workstations for molecular physicists, I’ve had a lot of experience with Nvidia and hardware accelerated graphics. But it turns out that rendering thousands of spherical atoms in the most complex molecules is pretty trivial compared to modern games. So much so that for the workstations I build, I like to use this silent fanless card (GeForce 8400) which is less than $40 at the moment. Works fine for many applications and lasts forever. Here’s an example of the crazy pentameric symmetry found in an HPV capsid taken from my 3 monitors, reduced from 3600x1920, driven by this humble $40 card.

hpv_from3600x1920.png

But for games, it doesn’t even come close to being sufficient.

How do you choose a modern graphics card? I have to confess, I have no idea. I only recently learned that Nvidia model numbers follow a rough scheme, despite having always seemed completely random to me.

Eventually I purchased an Nvidia GeForce GTX 760. I thought it worked fine. Recently, my son had somehow managed to acquire a new graphics card. A better graphics card. This was the Nvidia GeForce GTX 1050 Ti. Obviously it’s better because that model number is bigger, right? My son believed it was better but we really knew very little about the bewildering (intentionally?) quagmire of gaming hardware marketing.

Take for example this benchmark.

passmark.png

Sadly they don’t show the GTX 1050, but based on the 1060 and 1070, you’d expect this card to be way better, right?

But then check out this benchmark which does include both. It’s better but not such a slam dunk. (Ours is the Ti version, whatever that means.)

futuremark.png

People often come to me with breathless hype for some marketing angle they’ve been pitched for computer performance and I always caution that the only way you can be sure it will have the hoped-for value is if you benchmark it on your own application. You can’t blindly trust generic benchmarks, which at best only coincidentally resemble your requirements and at worst are completely gamed. Since I had these cards and I was curious to find out what the difference between GPUs really looked like, I did some tests.

Before we return to the point of the exercise, playing awesome games awesomely, let’s take a little hardcore nerd detour into another aspect of gaming graphics cards: the zygote of our AI overlords. Yes, all that scary stuff you hear about super-intelligent AI burying you in paperclips is getting real credibility because of the miracles of machine learning that have been, strangely, enabled by the parallel linear algebra awesomeness of gaming graphics hardware.

Last year I did a lot of work with machine learning and one thing that I learned was that GPUs make the whole process go a lot faster. I was curious how valuable each of these cards was in that context. I dug out an old project I had worked on for classifying German traffic signs (which is totally a thing). I first wanted to run my classifier on a CPU to get a sense of how valuable the graphics card (i.e. the GPU) was in general.
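
If you’re wondering what such a classifier even looks like, here is a rough from-memory sketch, not my exact network, of a LeNet-style model in the old TensorFlow 1.x API. It trains on random dummy data so it stands alone; the real pickled sign data would be swapped in to do anything useful.

# Sketch only: a LeNet-style TensorFlow 1.x classifier for 32x32 RGB
# traffic sign images, 43 classes (GTSRB). Dummy data keeps it standalone.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None,))

conv1 = tf.layers.conv2d(x, 16, 5, activation=tf.nn.relu)      # -> 28x28x16
pool1 = tf.layers.max_pooling2d(conv1, 2, 2)                    # -> 14x14x16
conv2 = tf.layers.conv2d(pool1, 32, 5, activation=tf.nn.relu)   # -> 10x10x32
pool2 = tf.layers.max_pooling2d(conv2, 2, 2)                    # -> 5x5x32
flat = tf.layers.flatten(pool2)
fc1 = tf.layers.dense(flat, 128, activation=tf.nn.relu)
logits = tf.layers.dense(fc1, 43)                               # 43 sign classes

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
correct = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    images = np.random.rand(128, 32, 32, 3).astype(np.float32)
    labels = np.random.randint(0, 43, size=128).astype(np.int32)
    for _ in range(10):  # a few steps on nonsense data just to exercise it
        _, acc = sess.run([train_op, accuracy], feed_dict={x: images, y: labels})
    print("accuracy on dummy batch:", acc)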

Here is the CPU based run using a 4 core (8 with hyperthreading) 2.93GHz Intel® Core™ i7 CPU 870.

Loaded - ./xedtrainset/syncombotrain.p Training Set:   69598 samples
Loaded - ./xedtrainset/synvalid.p Training Set:   4410 samples
Loaded - ./xedtrainset/syntest.p Training Set:   12630 samples

2018-01-16 19:02:54.260119: W tensorflow/core/platform/cpu_feature_guard.cc:45]
The TensorFlow library wasn't compiled to use SSE4.1 instructions, but
these are available on your machine and could speed up CPU
computations.
2018-01-16 19:02:54.260143: W tensorflow/core/platform/cpu_feature_guard.cc:45]
The TensorFlow library wasn't compiled to use SSE4.2 instructions, but
these are available on your machine and could speed up CPU
computations.

Training...
EPOCH 1 ... Validation Accuracy= 0.927
EPOCH 2 ... Validation Accuracy= 0.951
EPOCH 3 ... Validation Accuracy= 0.973
EPOCH 4 ... Validation Accuracy= 0.968
EPOCH 5 ... Validation Accuracy= 0.958
EPOCH 6 ... Validation Accuracy= 0.980
Model saved

Test Accuracy= 0.978

real    4m42.903s
user    17m31.120s
sys     2m28.476s

So just under 5 minutes to run. I could see that all the cores were churning away and the GPU wasn’t being used. You can see some (irritating) warnings from TensorFlow (the machine learning library); apparently I have foolishly failed to compile support for some of the CPU tricks that could be used. Maybe some more performance could be squeezed out of this setup but compiling TensorFlow from source code doesn’t quite make the list of things I’ll do simply to amuse myself.

Oh, and my software can indeed identify which German traffic sign it’s looking at 98% of the time which is pretty decent.

Next I installed the version of TensorFlow that uses the GPU.

conda install -n testenv tensorflow-gpu
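
A quick way to make sure TensorFlow can actually see the card (this is the old 1.x API; the exact device names it reports vary a bit by version) is something like:

# Quick check that TensorFlow (1.x) can see the GPU.
from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])
# Expect something like ['/cpu:0', '/gpu:0'] if the card and CUDA libraries are found.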

Now I was running it on the card that Linux reports as: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1).

Loaded - ./xedtrainset/syncombotrain.p Training Set:   69598 samples
Loaded - ./xedtrainset/synvalid.p Training Set:   4410 samples
Loaded - ./xedtrainset/syntest.p Training Set:   12630 samples

2018-01-16 20:12:40.294673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955]
Found device 0 with properties:
name: GeForce GTX 1050 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.392
pciBusID 0000:01:00.0
Total memory: 3.94GiB
Free memory: 3.76GiB
2018-01-16 20:12:40.294699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-01-16 20:12:40.294713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y
2018-01-16 20:12:40.294726: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0)

Training...
EPOCH 1 ... Validation Accuracy= 0.920
EPOCH 2 ... Validation Accuracy= 0.937
EPOCH 3 ... Validation Accuracy= 0.975
EPOCH 4 ... Validation Accuracy= 0.983
EPOCH 5 ... Validation Accuracy= 0.971
EPOCH 6 ... Validation Accuracy= 0.983
Model saved

2018-01-16 20:13:18.767520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0)
Test Accuracy= 0.984

real    1m7.441s
user    1m5.344s
sys     0m5.452s

You can see that it found and used the GPU. This took less than a quarter of the time that the CPU needed! Clearly GPUs make training neural networks go much faster. What about how it compares to the other card?

One caveat is that I didn’t feel like swapping the cards again, so I ran this on a different computer. This time on a six core AMD FX(tm)-6300. But this shouldn’t really matter much, right? The processing is in the card. That card identifies as: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1). Here’s what that looked like.

Loaded - ./xedtrainset/syncombotrain.p Training Set:   69598 samples
Loaded - ./xedtrainset/synvalid.p Training Set:   4410 samples
Loaded - ./xedtrainset/syntest.p Training Set:   12630 samples

2018-01-16 20:13:57.953655: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940]
Found device 0 with properties:
name: GeForce GTX 760
major: 3 minor: 0 memoryClockRate (GHz) 1.0715
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.88GiB
2018-01-16 20:13:57.953694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2018-01-16 20:13:57.953703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2018-01-16 20:13:57.953715: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0)

Training...
EPOCH 1 ... Validation Accuracy= 0.935
EPOCH 2 ... Validation Accuracy= 0.953
EPOCH 3 ... Validation Accuracy= 0.956
EPOCH 4 ... Validation Accuracy= 0.976
EPOCH 5 ... Validation Accuracy= 0.971
EPOCH 6 ... Validation Accuracy= 0.979
Model saved

2018-01-16 20:14:43.861117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030]
Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0)
Test Accuracy= 0.977

real    1m0.685s
user    1m10.636s
sys     0m7.164s

As you can see, this is pretty close. I certainly wouldn’t want to spend a bunch of extra money on one of these cards over another for machine learning purposes. So that was interesting but what about where it really matters? What about game performance?

This is really tricky to quantify. Some people may have different thresholds of perception about some graphical effects. Frame rate is an important consideration in many cases, but I’m going to assume that 30 frames per second is sufficient since I’m not worrying about VR (which apparently requires 90fps). My goal was to create the setup most likely to highlight any differences in quality. I created two videos, one using each card on the same computer, and then spliced the left side of one to the right side of the other.

This video is pretty cool. In theory, it is best appreciated at 1920x1080 (full screen it maybe). Locally, it looks really good but who knows what YouTube has done to it. Even the compositing in Blender could have mutated something. Even the original encoding process on my standalone HDMI pass-through capture box could have distorted things. (This standalone capture box does produce some annoying intermittent artifacts like the left of the screen at 0:15 and the right at 0:21 — this is the capture box and has nothing to do with the cards.) And of course if you’re using Linux and Firefox you probably can’t see this in high quality anyway (ahem, thanks YouTube).

So that’s video cards for you. What may look like hardware models with an obvious difference may not really have much of a difference. Or they might. In practice, you need to check them to really be sure. If you noticed any clear difference in the two video sources, let me know, because I didn’t see it. Frame rates for both were locked solidly at 30fps.

Speaking of incredibly small differences, how about those two laps around the Monaco Grand Prix circuit? I drove those separately (in heavy rain with manual shifting) and the driving is so consistent that they almost splice together. I’ve enjoyed playing F1 2015. This is the first time Linux people could play this franchise. The physics are as amazing as the graphics. What is completely lame, however, are the AI opponents (too annoying to include in my video). Wow they are stupid! Computer controlled cars… a very hard problem.

My Next Car

2018-01-16 19:51

car.jpg

Some random guy on the internet offered to send this bumper sticker to people who wrote to him. And I did and he did. Thanks, Ben!

Blender The Beast

2017-12-08 13:00

The first computer program I ever saw run was a 3d graphical virtual reality simulation which was as immersive as any I’ve ever experienced. What is really astonishing is that this took place in 1979 and the program was loaded into less than 48 kilobytes of RAM from a cassette tape. Yes, a cassette tape.

That program, called FS1, was written by a genius visionary named Bruce Artwick. Very soon after my dad and I saw that demonstration, we were among the first families to have a computer in our home. Of course we loved Flight Simulator, as it is better known. But there’s some even more obscure ancient history hiding in there.

Not long after that, Artwick’s company Sublogic released a program called A23D1. You can find an ancient reference to it in the March 1980 edition of Byte magazine. It simply says, "A23D1 animation package for the Apple II ($45 on cassette, $55 for disk)." That is all I can find to remind myself that I wasn’t just dreaming it.

Although Flight Simulator was jaw droppingly spectacular, I almost think that A23D1 was even more historically premature. It was nothing less than a general purpose 3d modeling program and rendering engine. Remember, this was for the 8-bit 6502 processor with 48kB of RAM.

Of course we’re not talking about Pixar level of polish, but in 1980 seeing any 3d computer graphics was nearly a religious experience. I think it would be hard to understand the impact today. It was like looking through a knothole in the fence between our reality and the magical land of the fantastic. Remember at this time the only 3d graphics anybody had ever seen were on Luke’s targeting computer and, as dorky as those graphics look today, at the time we walked out of the theaters no less stunned than if we’d just returned from an actual visit to a galaxy far, far away.

I remember my dad getting out the graph paper and straining his way through the severe A23D1 manual until, many hexadecimal conversions later, he had created a little sailboat in a reality that had not existed before in our lives. To see a window to another universe in our house, tantalizingly under our control, was mind-blowing. These were the first rays of light in the dawn of virtual reality.

I think A23D1 overreached a bit. It was not truly time for 3d. I spent my high school years absorbed by the miraculous new 2d "paint" programs. When I landed my first gig as an engineering intern for a metrology robot company, they had a copy of AutoCAD. I don’t know exactly why because nobody used it or even knew how. I was drawn to it immediately. There was no mouse (yes, the AutoCAD of 1988 had a keyboard-only mode which was pretty commonly used) and the monitor was monochrome. I started systematically building expertise. I eventually learned how to model things in 3d and how to write software in AutoLisp (apparently a direct contemporary of EmacsLisp).

AutoCAD formed the basis of a pretty good engineering career for me. The problem was that I was pushing the limits of what AutoCAD was designed for. I constantly struggled with the fact that (1990s) AutoCAD’s 3d features were roughly bolted on to an earlier 2d product. The expense of AutoCAD was tolerable for a business but not for me personally. As AutoCAD moved away from any kind of cross-platform support, the thought of using it on a stupid OS filled me with dread. As a result of the dark curse of proprietary formats I found myself cut off from a large body of my own intellectual work.

That’s the background story that helps explain why I thought it might be best if I recreated AutoCAD myself from scratch. I was kind of hoping the free software world would handily beat me to it, but no, my reasons are still as good as ever to press on with my own incomplete geometric modeler.

But it is incomplete. And that has been a real impediment for someone like me who is so experienced with 3d modeling. A few years ago, I was making some videos and having trouble finding free software that was stable enough to do the job. I eventually was directed to Blender and I was impressed. I have done a lot of video editing now with Blender (email me for a link to my YouTube empire if you’re interested) and it has never let me down. Blender has a very quirky interface (to me) but it is not stupid nor designed for stupid people. After getting a feel for it I started to realize that this was a serious tool for serious people. I believe it is one of the greatest works of free software ever written.

My backlog of 3d modeling projects has grown so large that I decided to spend the end of this year getting skilled at using Blender. I have envisioned a lot of engineering projects that just need something more heavy duty than what my modeling system is currently ready for. I also think that my system can be quite complementary to something like Blender.

The problem with Blender for me is that it is a first class tool for artists. But for engineering geometry, I find it to be more of a challenge. My system on the other hand is by its fundamental design the opposite. One of the things that would always frustrate me with bad AutoCAD users (which is almost all of the ones I ever encountered, and if you’re an exception, you’ll know exactly what I mean) is that they often would make things look just fine. This is maddening because looking right is not the same thing as being right. Blender specializes in making things look great. Which is fine but when I start a project I usually have a long list of hard numerical constraints that make looks irrelevant. I’m not saying Blender is incapable; the fact that there’s a Python console mode suggests that all serious things are more than possible with Blender.
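
For example, here is a minimal sketch of driving Blender from its Python console with hard numbers instead of eyeballs. (The bpy operators are real, but exact keyword arguments drift between Blender versions, so treat this as illustrative rather than gospel.)

# Run in Blender's Python console (or: blender --background --python thisfile.py).
# Placing primitives by exact numeric constraints rather than by dragging.
import bpy
import math

# A shaft 100 units long, 10 units in diameter, along Z, at exact coordinates.
bpy.ops.mesh.primitive_cylinder_add(vertices=64, radius=5.0, depth=100.0,
                                    location=(0.0, 0.0, 50.0))
bpy.context.object.name = "shaft"

# Four 3-unit-diameter holes-to-be on a 40 unit bolt circle, positioned by trig.
for i in range(4):
    a = i * math.pi / 2.0
    bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=1.5, depth=10.0,
                                        location=(20.0 * math.cos(a),
                                                  20.0 * math.sin(a), 0.0))
    bpy.context.object.name = "hole_%d" % i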

But I get a bit dispirited when I go looking for documentation for such things and turn up nothing. Even for relatively simple things this is all too common.

blender_docs.png

Since I’ve just had such a great experience with on-line education I thought maybe there was some such way to learn Blender thoroughly. And there is! I’ve been going through this very comprehensive course from Udemy. I’m about half way through it and it basically provides a structured way to go through most of the important functionality of Blender while getting good explanations and plenty of practice.

Here’s an example of a stylish low-poly chess set I created.

chess.png

Not that exciting but a good project to get solid practice with.

With AutoCAD I remember writing all my own software to animate architectural walk-throughs and machine articulation simulations. Obviously Blender comes with all of that refined for a professional level of modern 3d animation craftsmanship. Here’s a quick little animation I did which was not so quick to create, but very educational.

xedlamp.gif

Rendering this tiny thing I learned that Blender is the ultimate CPU and GPU punisher. Simultaneously! If you want to melt your overclocked gaming rig, I recommend Blender.

The reason I think it’s wise and safe to invest so heavily in Blender is that this rug will never be pulled out from under me. I can’t afford AutoCAD so that door is slammed in my face. Blender, on the other hand, is free software (GPL licensed). I even have access to the source code if there’s something I don’t like. No excuses.

I hope I can integrate it with the more engineering oriented geometry tools I have written. I am confident that I can use it to start design work on my own autonomous vehicles and to generate assets for vehicle simulations in game engines.

Blender is a fun program. It is heroically cross-platform. You can just download it from blender.org. If you can’t get inspired by the awesome artwork people have created (e.g.) you’re probably pretty dull. While there is a lot to it, the rewards are commensurate. If you have ever used A23D1, Blender is well within your capabilities. The same is true if you have ever run a virtual fashion empire designing and selling virtual skirts to virtual people. In fact, if that describes you, I would highly recommend you pay the $10 for this Udemy course and get to it!

Patently Ridiculous

2017-12-06 13:33

Years ago I tried to talk some sense about what I feel are overblown fears of scary AI enslaving humanity. In that post, I pointed to The Economist’s observation that we’ve been here before. They mention that "government bureaucracies, markets and armies" have supernatural power over ordinary humans and must be handled with care. A new article expands on that theme nicely; the short version is entirely captured by the title, "AI Has Already Taken Over, It’s Called the Corporation".

In my aforementioned post I proposed my own idea that AI wouldn’t be much of a concern because if it was truly intelligent, it wouldn’t care about humans one bit. Sort of like we don’t go around worrying about diatoms even though they’re pretty awesome and vastly outnumber us.

If escaping from the scary menace of SkyNet AI involves, essentially, obscurity, maybe the same is true with the ominous spectre of corporations. For example, 35 U.S.C. §271(a) says pretty clearly that, "…whoever … makes, [or] uses … any patented invention… infringes the patent."

Let’s say I’m pursuing a research agenda to accelerate autonomous car technology. If I work for a big company, patents provide a guide to what must be treated as forbidden. If I avoid such entanglements and work by myself, patents can be stolen as expedient with complete disregard for the law. Probably. So I got that goin for me, which is nice.

And with all that in mind, let’s turn now to autonomous car news of the weird. This article in Wired talks about some random engineer dude with an interest in autonomous car company lawsuits. Sort of like me but, apparently, with a bit more disposable cash. If you’ll recall, I wrote about the extremely bizarre testimony in the Waymo v. Uber lawsuit here and here.

This random engineer guy, Eric Swildens, was watching the circus too and he started to get the feeling that the whole case Waymo was presenting was kind of weak for no other reason than the putatively infringed patent was kind of stupid. Sure enough, he does some minor digging and finds out that there’s prior art and yadayada Waymo’s case is embarrassing and Uber’s defense oversight maybe more so. If any of that sounds interesting, do check out the whole article which is surreal.

But here’s the thing… In those depositions, Waymo seemed pretty pissed off at their man switching teams and taking some tech (and enough bonus pay to start a cult). I thought the technology involved was trade secret stuff. There was all this talk about what was checked out of the version control and who had what hard drive where, etc. But am I to understand that all of this was really about a specific patent which can be accessed by anybody with a web browser (made easy by Google no less)? Something doesn’t make sense.

Whatever. Thanks to the magic of the Streisand effect, I am cheerfully reading through all Waymo’s patents.

--------------------------

For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2017