Chris X Edwards

--------------------------

Compiles... Runs... Works!! OMG!!!

2015-07-25 11:45

After studying OpenCV for so long, it was very depressing to get stuck on the starting line by orthogonal v4l codec issues. Finally I prevailed and was able to write a C program that takes raw input from my PlayStation Eye webcam and lets me process and/or save it.
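To give a flavor of what the capture loop boils down to, here is a minimal sketch in Python with OpenCV rather than my actual C/v4l code; the device index, frame count, and filenames are just placeholders:

    # Minimal capture sketch (Python/OpenCV), not the C/v4l program described above.
    import cv2

    cap = cv2.VideoCapture(0)        # 0 = first video device; the Eye may enumerate differently
    if not cap.isOpened():
        raise SystemExit("could not open the camera")

    frame_count = 0
    while frame_count < 100:         # grab a fixed number of frames for this sketch
        ok, frame = cap.read()       # 'frame' is a raw BGR array, ready for processing
        if not ok:
            break
        cv2.imwrite("frame%04d.png" % frame_count, frame)   # or hand it to any OpenCV routine
        frame_count += 1

    cap.release()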

Often in programming crossing the starting line is half the journey.

To celebrate I made a funny animated GIF of that moment every serious programmer knows when some recalcitrant code finally compiles and works.

This was done using my aforementioned capture program plus avconv and ImageMagick (documented in my notes, of course).
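The real commands are in my notes, but the shape of the pipeline is roughly this (a sketch only; the filenames, frame rate, and delay are placeholders, not the actual values I used):

    # Rough sketch of the GIF pipeline: pull frames out of the capture with avconv,
    # then stitch them into an animated GIF with ImageMagick. All names and numbers
    # here are placeholders.
    import glob
    import subprocess

    # Extract numbered PNG frames from the captured video (10 frames per second here).
    subprocess.run(["avconv", "-i", "capture.avi", "-r", "10", "frame%04d.png"], check=True)

    # Stitch the frames into a looping GIF; -delay is in hundredths of a second.
    frames = sorted(glob.glob("frame*.png"))
    subprocess.run(["convert", "-delay", "10", "-loop", "0", *frames, "out.gif"], check=True)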

Hover over the image to animate or click here.

What a relief! I am so ready to move on!

Defense Against The Dark Arts

2015-07-21 07:30

In my last post I pointed out that trusting a big company’s cloud service is no worse than trusting the same big company’s locally run software. Assuming a situation where we might wish to not fully trust anyone, an astute reader asked about the implicit trust we give to our hardware manufacturers.

The specific concern was that a company like Intel, ARM, or AMD could subvert physical CPUs to unnaturally cooperate with an attacker. I immediately thought of a system where a magic value stored in memory or a register triggered arbitrary execution or privilege escalation. I also thought of subverting the PRNGs as a likely target for this kind of attack. I think such a thing is definitely possible. There are many good resources about CPU backdoors that corroborate such a belief. This Wikipedia article on the shenanigans involving the Dual Elliptic Curve random number generator and the NSA makes it pretty clear that this isn’t the kind of threat that’s in the same category as, say, aliens from space beaming thoughts into your head which make you "accidentally" delete your PowerPoint slides.
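To make the PRNG worry slightly more concrete, a common hedge at the application level is to mix several independent sources so that no single generator fully controls the output. A very rough sketch (the extra inputs here are weak and purely illustrative, not a vetted design):

    # Rough illustration of hedging against a single subverted randomness source:
    # hash several independent inputs together so no one source fully controls the
    # output. The extra inputs are weak and only for illustration.
    import hashlib
    import os
    import time

    def mixed_bytes(n=32):
        pool = hashlib.sha256()
        pool.update(os.urandom(32))                                # kernel CSPRNG
        pool.update(time.perf_counter_ns().to_bytes(8, "little"))  # timing jitter (weak)
        pool.update(os.getpid().to_bytes(4, "little"))             # per-process value (very weak)
        return pool.digest()[:n]

    print(mixed_bytes().hex())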

I would personally say that the reason this attack is unlikely to be widely problematic in the wild at this time is that there are so many much easier ways for dedicated attackers to compromise systems. But imagine a world where everyone goes Stallman and insists on a certain level of maximal transparency. (Uh, let’s not dwell on Ken Thompson’s issue of trusting trust - let’s assume, like Stallman and Thompson, we can write everything in opcodes from scratch.) The opacity of the hardware layer would still pose a problem. What could possibly be done to ameliorate this class of threat?

I can think of two things. The first is pretty obvious - carefully check stuff. I think this is one of the reasons why a poorly executed hardware attack would be doomed. Someone somewhere would have some weird use case that gets the "wrong" answer. They would wonder, they would post about it, and it would work its way up to security researchers who would delight in isolating the problem. We saw how this would work with a simple but subtle error in mid-1990s Intel CPUs (the Pentium FDIV bug). But as sophistication goes up, one can imagine mechanisms that obscure this kind of replay of the hardware exploit.
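As a toy illustration of "carefully check stuff", the classic probe for that bug was trivially simple; on a correct FPU the residual below comes out to 0, while the flawed chips famously returned 256 (a sketch using the widely published test values):

    # The classic Pentium FDIV probe. A handful of wrong entries in the divider's
    # lookup table made this specific division come out visibly wrong.
    x = 4195835.0
    y = 3145727.0
    residual = x - (x / y) * y
    print("FDIV residual:", residual)   # 0 on a good FPU; 256 on the flawed chips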

With that in mind, and given that pretty much any hardware can be subverted (memory, motherboard bridges, bus controllers, Ethernet controllers, etc.), defending against this kind of thing is no small problem. My second approach would be to use a distributed VM. Is this wildly complex? Yes. Practical? Probably not. Completely effective? I don’t think that’s really possible. But it could add so much entropy into what is happening at the low level that produces the genuine results you actually want that corrupting the transistor logic simply stops being a good attack. I feel like a misbehaving CPU would simply cause errors for a distributed VM system rather than successfully attack the user-level applications. That might still suffice as a denial of service attack. Of course, I could be flagrantly wrong about this, and it’s already rather impractical anyway.
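To sketch the redundancy idea in miniature (nothing like a real distributed VM, just the consistency check): run the same work through several independent workers and only accept an answer they agree on, so one lying CPU surfaces as a detectable disagreement rather than a silently wrong result.

    # Toy sketch of the redundancy idea. Real workers would be separate machines;
    # here they are local stand-ins.
    from collections import Counter

    def run_everywhere(task, workers):
        """Run 'task' through each worker callable and collect all results."""
        return [worker(task) for worker in workers]

    def agreed_result(results, quorum):
        """Return the majority answer if at least 'quorum' workers agree."""
        answer, votes = Counter(results).most_common(1)[0]
        if votes < quorum:
            raise RuntimeError("workers disagree: %r" % results)
        return answer

    workers = [lambda t: sum(t), lambda t: sum(t), lambda t: sum(t)]
    print(agreed_result(run_everywhere(range(10**6), workers), quorum=2))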

Without much more to say, I’ll conclude with a link to a video of Jeri Ellsworth making a batch of microchips in her kitchen. And for the rest of us, a nice instructional video on making very stylish tin foil hats. Aluminum foil actually; tin forms highly toxic stannanes. Which is a reminder that there’s always something out to get us!

Partly Cloudy Locally

2015-07-15 08:36

Every month for at least the last 10 years, I have read Bruce Schneier’s CRYPTO-GRAM newsletter. If you’re a security professional of any kind, the only excuse for not doing this is that you already know everything he writes about, and it’s pretty safe to assume you don’t. With this in mind, it’s not every day that hubris gets the better of me such that I am ready to completely repudiate Schneier’s wisdom on a rather important and topical security issue. However, today is that day.

In a series of articles in the Economist, Schneier takes up the question "Should Companies Do Most Of Their Computing in the Cloud?" Since I am a bespoke cloud computing craftsman, you may think my arguments are similar to the usual ones that the "non-cloud" partisans make (which Bruce competently covers in the articles).

No. Not at all. I’m actually pretty sympathetic to cloud advantages. As you’ll see, it’s probably better than a normal local setup. In this entire debate, I feel that both sides (cloud is good/cloud is bad) have largely missed the most glaring and important security issue. Interestingly, I’ve felt this way for nearly 20 years, since "cloud" was still a weather feature. With the exception of a negligible number of insane people, I’ve never found anyone who seems to have given my perspective any thought at all. That’s why I feel it might be good to clearly state my personal rule of cloud security.

If you cannot audit the software you use for privileged tasks and you connect that system to the internet, then your system is potentially as insecure as possible.

I’m not quibbling with a detail here. Bruce Schneier is wrong. Let me demonstrate the absurdity of the current argument by using Schneier’s own computer habits. Here Bruce provides a pretty bog-standard rundown of "cloud is bad" thinking.

In contrast, I personally use the cloud as little as possible. My e-mail is on my own computer — I am one of the last Eudora users — and not at a web service like Gmail or Hotmail. I don’t store my contacts or calendar in the cloud. I don’t use cloud backup. I don’t have personal accounts on social networking sites like Facebook or Twitter. (This makes me a freak, but highly productive.) And I don’t use many software and hardware products that I would otherwise really like, because they force you to keep your data in the cloud: Trello, Evernote, Fitbit.

My cloud computing avoidance closely follows his. The problem here is that if you think it’s important to improve security by doing things this way, you cannot use an operating system like Microsoft Windows (or OS X). If you use a proprietary operating system, you have completely failed at the objective of not trusting the exact same companies that you would need to trust to use the cloud. Notice I’m not advocating for one thing or the other. I’m just pointing out that the security concerns about trusting the cloud are nothing new. If you didn’t feel the need to scrutinize your dependence on proprietary software, then congratulations! You don’t need to worry about cloud security either. It couldn’t possibly be worse.

Interestingly Schneier knows he’s wrong. In the same CRYPTO-GRAM he quotes, without argument, Micah Lee who points out what has always been obvious to me.

Whatever you choose, if trusting a proprietary operating system not to be malicious doesn’t fit your threat model, maybe it’s time to switch to Linux.

We can defer the question of "should we trust proprietary OS vendors not to compromise users' security in unwanted ways?" (The quick answer is no and no and no and OMGNO!!!)

If you can’t trust Azure to safely do whatever you want done with your data, you can’t trust Windows itself for the exact same reasons. But it’s not merely an equivalent threat. Your Windows (or OS X) non-cloud local system is worse against the threat of a compromised or untrustworthy service provider. Yes, worse. The reason is the same as the reason criminals would much rather break into my Linux cluster to mine BTC than physically break into the supercomputer center and haul it away in a truck. Why would the perpetrator want to pay for electricity, hardware, facilities, etc.? If the NSA wanted all data, it’s far more efficient for them to let you host it. All they would need is a key to pop into your computer at any time. Are you sure there is no back door on your computer? For me, that’s a theoretically testable hypothesis. I fear that for Schneier it’s magical thinking.

Review: Seveneves

2015-07-03 14:39

I just finished Neal Stephenson’s latest book, Seveneves. While I completely sympathized with some of the unenthusiastic reviewers, I loved this book nonetheless. How much did I love it? Well, I finished it 3 days after checking it out, which is remarkable given that it is 867 pages long. I would say that if you liked The Martian (which I’ve already reviewed) so much you wish it were about triple the size, then you’ll love the first two thirds of Seveneves. If you’re also a Neal Stephenson fan, you’ll like the final third too. And if you’re a huge Neal Stephenson fan like I am, you’ll hope he’s working on a 1000-page sequel. As many unhappy reviewers point out, this is not so much a book about emotional human interactions and believable psychology as it is about engineering and, literally, rocket science. If you find complex technical solutions to problems exciting and interesting, this book is a delight. Even if you’re not so into the technical aspect per se, the rigor and competence of Stephenson’s conceptions help to build a terrifically coherent and believable world.

Code Reuse Not Considered Unharmful

2015-06-17 14:58

Let’s not worry about my views on software engineering fads and frameworks because I’m nobody special. But I was pleased that I’m not the only one with the following opinion.

…almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go…with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.

— Donald Knuth

I agree with that exactly, right down to the pair programming.

The topic of reusable code is brilliantly addressed in this great article. I definitely will waste time changing parameters over to a dict and iterating over their keys when just having a few parameters get the same (cut/paste) treatment would have been fine. But my sensitivity to that is nothing compared to my distrust of external dependencies. It’s not just that I have been burned so many times, but that I’ve been burned in so many different ways - code doesn’t quite work, error bombs, a tiny feature quadruples the size of the project, requires its own exotic dependencies, takes longer to learn/understand than reimplementing, security issues, lack of flexibility, poor performance, license/owner changes, ad nauseam. Being lazy and using other people’s code can be a lot of work!
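A made-up example of exactly the kind of time-wasting I mean:

    # The "reusable" version funnels three made-up settings through a dict and a loop...
    settings = {"width": 640, "height": 480, "fps": 30}
    for key, value in settings.items():
        print("set %s to %s" % (key, value))

    # ...when the plain cut/paste version says the same thing and is easier to re-edit later.
    print("set width to 640")
    print("set height to 480")
    print("set fps to 30")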

--------------------------

For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2015