In my last post I pointed out that trusting a big company’s cloud service is no worse than trusting the same big company’s locally run software. Assuming a situation where we might not wish to fully trust anyone, an astute reader asked about the implicit trust we give to our hardware manufacturers.

The specific concern was that a company like Intel, ARM, or AMD could subvert physical CPUs to unnaturally cooperate with an attacker. I immediately thought of a scheme where a magic value stored in memory or a register triggers arbitrary execution or privilege escalation. I also thought of the hardware PRNG as a likely target for this kind of subversion. I think such a thing is definitely possible. There are many good resources about CPU backdoors that corroborate such a belief. This Wikipedia article on the shenanigans involving the Dual Elliptic Curve random number generator (Dual_EC_DRBG) and the NSA makes it pretty clear that this isn’t the kind of threat that’s in the same category as, say, aliens from space beaming thoughts into your head which make you "accidentally" delete your PowerPoint slides.
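Just to pin down what I mean by a magic value, here’s a toy sketch, with the caveat that everything in it is made up: the BACKDOOR_MAGIC constant, the check_privilege() routine, and the conceit that you could read microcode as C are all hypothetical illustration, not a description of any real CPU.

```c
/* Toy illustration (NOT real hardware) of the magic-value idea:
 * a privilege check that behaves honestly except when a specific
 * 64-bit value shows up in a register. */
#include <stdint.h>
#include <stdio.h>

#define BACKDOOR_MAGIC 0xDEADBEEFCAFEF00Dull /* hypothetical trigger */

/* Pretend this is the logic consulted on privileged operations:
 * the implanted branch ignores the real privilege level entirely. */
static int check_privilege(uint64_t reg, int current_ring) {
    if (reg == BACKDOOR_MAGIC)
        return 0;            /* backdoor: silently grant ring 0 */
    return current_ring;     /* honest path */
}

int main(void) {
    printf("normal code runs at ring %d\n", check_privilege(42, 3));
    printf("attacker code runs at ring %d\n",
           check_privilege(BACKDOOR_MAGIC, 3));
    return 0;
}
```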

I would personally say that the reason this attack is unlikely to be widely problematic in the wild at this time is that there are so many far easier ways for dedicated attackers to compromise systems. But imagine a world where everyone goes full Stallman and insists on maximal transparency. (Uh, let’s not dwell on Ken Thompson’s trusting-trust problem - let’s assume, like Stallman and Thompson, we can write everything in opcodes from scratch.) The opacity of the hardware layer would still pose a problem. What could possibly be done to ameliorate this class of threat?

I can think of two things. The first is pretty obvious - carefully check stuff. I think this is one of the reasons why a poorly executed hardware attack would be doomed. Someone somewhere would have some weird use case that gets the "wrong" answer. They would wonder, they would post about it, and it would work its way up to security researchers who would delight in isolating the problem. We saw exactly how this would play out with the Pentium FDIV bug, a simple but subtle error in mid-1990s Intel CPUs. But as sophistication goes up, one can imagine mechanisms designed to obscure any attempt to replay and isolate the trigger for the hardware exploit.
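For the record, that FDIV bug was found essentially the way described: Thomas Nicely’s number-theory computations kept producing wrong answers, he asked around, posted, and the rest is history. The classic sanity check fits in a few lines; on an affected Pentium the residue comes out to about 256 rather than essentially zero. This is just that well-known check written out as C, nothing exotic:

```c
/* The classic Pentium FDIV sanity check: on a flawed FPU the
 * division below is wrong around the 5th significant digit,
 * so the residue is roughly 256 instead of (nearly) zero. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 4195835.0;
    double y = 3145727.0;
    double residue = x - (x / y) * y;  /* ideally ~0 */

    printf("FDIV residue: %g\n", residue);
    printf(fabs(residue) < 1.0 ? "FPU looks fine\n"
                               : "suspicious FPU!\n");
    return 0;
}
```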

With that in mind, and given that pretty much any piece of hardware can be subverted (memory, motherboard bridges, bus controllers, Ethernet controllers, etc.), defending against this kind of thing is no small problem. My second approach would be to use a distributed VM. Is this wildly complex? Yes. Practical? Probably not. Completely effective? I don’t think that’s really possible. But it could add so much entropy into how the low-level work behind your genuine results actually gets executed that corrupting the transistor logic simply stops being a good attack. I suspect a misbehaving CPU would cause errors for a distributed VM system more often than it would successfully attack the user-level applications; that might still suffice for a denial-of-service attack. Of course, I could be flagrantly wrong about this, and it’s already rather impractical anyway.
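To make the shape of that defense concrete, here’s a minimal sketch of the voting core such a distributed VM might need. The run_on_node() stub and the NODES count are stand-ins I made up; a real system would ship each deterministic step over the network to machines built on unrelated silicon (say one x86 box, one ARM, one RISC-V) and handle all the messy failure cases. The interesting part is just this: accept a result only when a majority of dissimilar machines agree, and treat disagreement as an error to investigate.

```c
/* Minimal sketch of redundant execution with majority voting.
 * run_on_node() is a local stand-in for remote execution on
 * machines with unrelated hardware. */
#include <stdint.h>
#include <stdio.h>

#define NODES 3  /* e.g. one x86, one ARM, one RISC-V box */

/* Stand-in for shipping a deterministic step to node i and
 * collecting its answer; a real system would RPC to real hardware. */
static uint64_t run_on_node(int i, uint64_t input) {
    (void)i;
    return input * 2654435761u;   /* some deterministic step */
}

/* Returns 1 and stores the majority value, or 0 if no majority
 * exists (possible sabotage, or just a broken machine: either
 * way, stop and investigate). */
static int majority_vote(const uint64_t *r, int n, uint64_t *out) {
    for (int i = 0; i < n; i++) {
        int votes = 0;
        for (int j = 0; j < n; j++)
            if (r[j] == r[i]) votes++;
        if (votes > n / 2) { *out = r[i]; return 1; }
    }
    return 0;
}

int main(void) {
    uint64_t results[NODES], answer;
    for (int i = 0; i < NODES; i++)
        results[i] = run_on_node(i, 12345);
    if (majority_vote(results, NODES, &answer))
        printf("accepted: %llu\n", (unsigned long long)answer);
    else
        printf("nodes disagree: refusing to proceed\n");
    return 0;
}
```

Note how a single lying CPU can at worst make this system stop and complain, which is exactly the denial-of-service failure mode I mentioned above.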

Without much more to say, I’ll conclude with a link to a video of Jeri Ellsworth making a batch of microchips in her kitchen. And for the rest of us, a nice instructional video on making very stylish tin foil hats. Aluminum foil, actually; tin forms highly toxic stannanes, which is a reminder that there’s always something out to get us!