Hunt for the kill switch in microchips

46 Responses to “Hunt for the kill switch in microchips”

  1. Nelson.C says:

    Is it feasible to x-ray scan a suspect chip and compare it to a known safe chip? For instance, with the PPC750 above, there are (presumably) safe versions of the chip; any subverted chip would have a different pattern of circuits. You don’t need to scan every chip; scanning a sample would reveal whether a manufacturer is supplying duff chips.

  2. nehpetsE says:

    Oh well...
    I guess it’s back to glass vacuum tubes and brave all-volunteer “Victory Pilots” flying inside the ICBMs to guide them to their destination.

  3. Hylst says:

    Is this really being followed up?
    How can the Department of Defense seriously keep control over all of that?

  4. Nelson.C says:

    Spork @29: But the EW systems are designed assuming that they’re built with “honest” hardware. If a chip has a bit of malware encoded into it, it’s essentially a sleeper agent in the plane’s network; all fine until it gets the code signal, then it starts subverting the dataflow in the network and working with other sleeper agents to corrupt the network completely. The EW systems are designed to protect against external threats, not to monitor their own condition.

  5. Seb says:

    @ #20 “Is it feasible to x-ray scan a suspect chip and compare it to a known safe chip?”

    Not sure about X-raying, but there are labs in a variety of countries that can section and analyse microelectronic circuits to determine the device’s construction and functionality.

    This technique can be used to identify chips which have had their part numbers removed as an IP protection scheme. I presume it would also be possible to discover any malicious circuitry, as is the task of the contractors mentioned in the article.

  6. Michael R. Bernstein says:

    I’m kind of surprised no-one has noted the connection to this paper, ‘Designing and Implementing Malicious Hardware’:
    http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/

  7. Marcel says:

    You know, rumor has it that some of the air molecules you breathe right now may have once started life as a malicious fart which escaped somebody’s lower backside in Beijing.

    (choke!)

  8. Falcon_Seven says:

    The Pentagon’s concern over the ‘off-shoring’ of microchip production has been discussed in the semiconductor industry, ad nauseam, for the last thirty years. There was even talk of forming a consortium of manufacturers to produce all the chipmaking equipment here in the U.S. to ensure the ‘integrity’ of the parts and to prevent the transfer of technology to the Japanese, of all people. They didn’t know what to do about it then; they cannot possibly do anything about it now.

  9. Beryllium says:

    On top of all the security concerns, they haven’t even begun looking for the potentially embarrassing Chinese equivalent of “all your base” stamped onto the circuits for shits and giggles :)

  10. Takuan says:

    Of course they have kill switches. This is the first thing any country will do once it has the expertise. I am also quite sure that the avionics of any American fighter jets sold to countries like Pakistan have Pentagon-mandated kill switches or similar engineered-in vulnerabilities.

  11. Billy Blight says:

    “Flatlander woman”

    or, perhaps,

    “Laputan machine”.

  12. GregLondon says:

    Gilbert: Avionic systems are like that: they just won’t respond to anything they don’t understand. There is no such thing as a “Kill Yourself” command and as such an enemy cannot issue one.

    It might be possible to mess up something like a radar system or a radio system. The chip could have circuitry added that watches the incoming signal data go by. When it sees some predetermined pattern, it could simply shut off the clocks to the chip or force it into a reset. Put a fuse in the circuit to force it to stay that way, even after a power cycle, and you’re screwed.

    Commercial ASICs are using fuses now to deal with large internal memories. Rather than having a bad bit in a memory mean that you have to toss the whole chip, there are some fuses on board that you can blow to swap out a bad row or column for a spare one. The chip never knows the difference.

    All you’d have to do is fuse the reset in the chip so that when you blow the fuse, it’s stuck in reset forever.

    It would be easy to do. And it would be pretty hard to detect in verification. Whether or not anyone would actually try doing it on a large scale, and whether or not it is worth the money to try to protect against it, is a different story.
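
    A rough sketch in Python of the kind of pattern-watcher/fuse latch I mean (purely illustrative; the trigger pattern and names are made up, not from any real part):

        # Toy model of a hidden "watch the bus, then latch dead" circuit.
        # The trigger sequence and the chip's "normal function" are invented.

        TRIGGER_PATTERN = [0xDE, 0xAD, 0xBE, 0xEF]  # hypothetical magic byte sequence

        class SabotagedChip:
            def __init__(self):
                self.history = []        # last few bytes seen on the bus
                self.fuse_blown = False  # one-way latch: once blown, stays blown

            def clock_in(self, bus_byte):
                """Called for each bus transaction the chip happens to observe."""
                if self.fuse_blown:
                    return None          # held in reset, even across power cycles
                self.history = (self.history + [bus_byte])[-len(TRIGGER_PATTERN):]
                if self.history == TRIGGER_PATTERN:
                    self.fuse_blown = True   # analogous to blowing an on-die fuse
                    return None
                return bus_byte ^ 0xFF       # stand-in for the chip's normal function

    Nothing in the chip’s specified behaviour changes until the magic sequence shows up, which is why toggling inputs against the spec won’t catch it.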

    Another way would be to have the logic triggered by a time-based circuit. After some time, date, or year, blow the fuse and the fleet is grounded. The way the failure-analysis process works, it would take months or years to fix.

    But there are plenty of ways you could do something like this that would completely shut down an entire system.

    One of the things about the fly-by-wire system that I worked on was that it used three *different* processors made by three *different* vendors. All three were running the exact same software, and all three were built into a system where all three ran in lock step with each other. Each processor calculated its results. Then the system on the board would do a majority vote and ignore the bad result.

    The idea was that if one of the processors screwed up a multiply or something, then the odds were the other two probably wouldn’t have the exact same bug.

    Worst case, triplicate all your logic, have three different systems designed and fabbed by three different vendors, countries, etc, and then it would be harder to take down a system.

    Cost goes through the roof, though.

    But it could be done.
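
    The voting part itself is simple; here is a toy version in Python, where the three “processors” are just functions standing in for independently designed and fabbed lanes:

        from collections import Counter

        def majority_vote(results):
            """Return the value at least two of the three lanes agree on, else None."""
            value, count = Counter(results).most_common(1)[0]
            return value if count >= 2 else None  # None = no majority, flag a system fault

        # Three lanes running the same computation; pretend lane_c is buggy or sabotaged.
        def lane_a(x): return x * x
        def lane_b(x): return x * x
        def lane_c(x): return x * x + 1

        print(majority_vote([lane_a(3), lane_b(3), lane_c(3)]))  # -> 9, the bad lane is outvoted

    (What this can’t protect against, of course, is all three lanes carrying the same hidden trigger, which is why the vendors have to be genuinely independent.)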

  13. BSD says:

    Turing says “Sorry, no.”

  14. Ernst Gruengast says:

    This is about military defence against China. The fear is that with these backdoors, the Chinese could bring down the US military. Three points:
    • China is spending inordinate amounts of its military budget on cyber-defence R&D
    • the recent Seagate hard-drive spat was exactly this, and was traced to Chinese-made chips
    • Chinese hacking of German and US defence networks tells us they’re in there working away
    Read these:
    http://www.willthomasonline.net/willthomasonline/Command_Override.html
    http://www.willthomasonline.net/willthomasonline/Verification.html
    It ain’t necessarily so, but these are the kinds of scenarios they’re thinking of.

    Yours respectfully,

    Ernst

  15. Spork says:

    OK, let’s assume for a second that these chips do contain some kind of “kill switch”. How exactly is the enemy supposed to execute the code required to activate this switch? Or are we assuming that the manufacturers of the chip, all its next-higher assemblies, the systems integrators, and any subsequent mod teams are all in on this? It sounds pretty unlikely.

  16. jimkirk says:

    Personally, I think EVERYTHING should have a self-destruct mechanism.

  17. airship says:

    It would be the ultimate irony if our country was militarily defeated and overrun by the Chinese simply by them turning off all the chips in our multi-trillion dollar war machine.

    That those chips were made in China because the paranoids who created our bloated military were the selfsame greedy bastards who drove our infrastructure overseas just so they could line their own pockets… well, that’s cosmic justice, now, isn’t it?

  18. scottfree says:

    Public health would be too bureaucratic. The military, however, is fairly straightforward… except you don’t even know who’s making the stuff you use to kill people!!!

    Next they’ll be telling us that a senator ordered a clone army twenty years ago and forgot to tell anyone about it.

  19. Moon says:

    Why do I get the feeling that some Pentagon General’s brother-in-law runs a chip testing facility?

  20. Ben says:

    If they have a kill switch, won’t you have to move a complicated set of levers into bizarre positions, recite your access code, and then wait fifteen minutes while the computer (in a female voice) counts down?

    “You have three minutes to self destruct.”

    Meanwhile, did somebody order a bunch of clones? I can’t seem to find the purchase order, but they were just dropped off at loading dock ‘G’

  21. madsci says:

    I have trouble imagining how this could be practical for any but the most specialized of components. I would imagine something like the F-35 uses mostly commodity parts – processors and FPGAs, for example. Parts that can do millions of different tasks.

    I don’t have any specific knowledge of military avionics, but I’ve dealt a little with satellite systems. More than a few satellites use a variant of the PowerPC 750, for example – basically the same chip that’s in a PowerBook G3.

    Say you’ve got the ability to put malicious circuitry in a PowerPC 750, and you want to use it to shut down an enemy’s spy satellite. How can you possibly have the processor tell, without intimate knowledge of the software it’ll be running and the hardware it’ll be part of, whether it’s part of a satellite or a laptop? And you have to not just recognize that it’s a satellite, but have the ability to receive a kill signal, or otherwise determine that the malicious circuitry should be used.

    If you’re having something like a radar signal processing assembly produced overseas, then sure, I can see that. But at a component level, it just doesn’t seem feasible.

    Of course, if the DoD is worrying about it, it might be because they’ve already done it to someone else. Other than the speculation about the Syrian radar mentioned in the article, the only case of that sort of sabotage I can recall dealt with some bugged printers. But those were complete systems, not generic parts.

    They’ve got some really smart people working on it, though, so they must know something I don’t. That, or it’s a lot of corporate welfare for certain contractors.

  22. Nelson.C says:

    Spork @24: It’s a bit sci-fi, but if each borked chip can communicate with every other borked chip via the normal data busses and internal network in a vehicle, say, then as long as just one of them is in a radar, radio or other communication module, when that one gets the coded kill signal it can pass it on to all the other chips in the plane. The chips wouldn’t have to do anything terribly specific to whatever machinery they’re in to muck things up, just randomly change instructions or data to glitch things or crash the system.

    For example, your whizzy new fighter plane has borked chips in the flight-control system, the GPS receivers, the engine-management system, radio, radar, and missiles. The enemy plane you’re painting with your radar picks up your scan and switches his IFF transceiver to send the ‘kill’ code, which is picked up by your radar and recognised by the borked chip in it. It passes the code on to all the other chips in the internal network, then a few cycles later starts randomly switching data, as do all the other chips. Your radar shows ghosts, you get thrust surges, your GPS insists you’re buried in a mountain when you’re over the sea, your plane starts to rattle itself apart as your fly-by-wire controls start reacting to random air currents and control inputs that aren’t really there, and if you succeed in firing a missile, it flies off at a tangent as it suffers the same problems. Oh, and you can’t send a Mayday call, because your frequency-hopping radio keeps hopping to the wrong frequencies.

    Just one of these duff chips in the plane can be coped with, but even two or three together would cause enough hassle to scrub a mission at best. That’s if you can get a signal in that way. Otherwise, it might be a matter of a saboteur sending a signal via other means, perhaps via regular diagnostic equipment.

    Shrug. I don’t know, not being very technical or a spook, but I think that’s the way I’d be thinking if I was paid to be paranoid about it.

  23. ZippySpincycle says:

    “You don’t check for the infinite possible things that are not specified,” says electrical engineering professor Ruby Lee, a cryptography expert at Princeton. “You could check the obvious possibilities, but can you test for every unspecified function?”

    So basically, there are known knowns, known unknowns, and unknown unknowns?

  24. Matthew Miller says:

    Sounds like someone at the Pentagon watched the Battlestar Galactica miniseries.

  25. hyperkine says:

    While some of these chips might have kill switches, I doubt many do and I doubt that many governments are competent to keep track of this sort of thing. To say nothing of actually implementing an attack.

  26. Takuan says:

    So, if everyone is so forward-looking: Y2K?

  27. doug117 says:

    #39 is right, I admit.

    If we are purchasing chips from our enemies, then we deserve what we get.

    (Incidentally, I worked for a company that sent its chips out to be fabbed by its competitor. They never worked. Big surprise, that.)

    Even if we build em ourselves, there’s always the potential for sabotage from within…

  28. David Bilek says:

    Wasn’t this a plot point in Gerrold’s WAR AGAINST THE CHTORR series? Seems like he had it in there about 20 years ago.

  29. eti says:

    This is what we get for having most of our microchips made in China and Taiwan.

  30. hep cat says:

    I imagine that the Pentagon is worried because they remember how they destroyed the trans-Siberian gas pipeline by letting the Soviets steal defective technology.
    http://query.nytimes.com/gst/fullpage.html?res=9F00E6DF173BF931A35751C0A9629C8B63

    I don’t know how you could tell the difference between sabotage and run-of-the-mill screw-ups, though; for instance, Intel publishes a list of known bugs in their chips that will cause “correctly” written software to fail.
    ftp://download.intel.com/design/Pentium4/specupdt/31030910.pdf

    I’d be more worried about some foreign power subsidizing low cost offshore software development shops to put logic bombs in widely used commercial software.

  31. ggm says:

    Also, alternatively: don’t worry, the kill switch only works if you are 300 ft underground in a six-sigma-secure disease laboratory, one of your workers has undiagnosed epilepsy, and you have a drunkard and a baby to work on.

  32. Spork says:

    #27 Hmmm, well I was an armourer in the Air Force, not an avionics geek, but I’m pretty confident that the radar and all the other receivers on the aircraft are designed with electronic warfare in mind. It’s a huge part of air warfare.

  33. GregLondon says:

    I’ve worked as an ASIC designer on everything from military satellites to ink-jet printers. I also did some work on Ada software for fly-by-wire avionics (software bug => plane spirals out of control) for commercial and military aircraft.

    The short of it is they can’t do what they’re talking about doing in any reasonable way. Not through verification anyway. Verification of the *software* for just a single system (such as the fly-by-wire computer, as opposed to, say, the nav system, or radar system, or display system, or whatever) can take years already. And that is just verifying that the system will do what it’s supposed to do based on its various ranges of known inputs.

    If there is some secret input, some secret code that could somehow trigger a kill switch, that would be pretty hard to detect from a verification standpoint (toggle the inputs to various values and look for the expected output), because you don’t know what the “trigger” is.

    On the other hand, if the only thing the black hats can control is the chip manufacturing, but not the board or the software, it will be hard for them to insert some kind of logic that will act normally during training exercises but suddenly fail during a real war.

    If the chip simply has some logic in it that says “self destruct air-to-air missile when it is launched”, then standard verification should find that because standard verification should go through all the operations of the chip and make sure it behaves as expected.

    The only way you could make something like this interesting would be to have some sort of externally based “sync” signal. One example is that Emperor Palpatine tells all the clones that it’s time for “operation 666”.

    But for that to work on something like a fighter jet, how will you radio the jet that it’s time to self destruct? Most systems are compartmentalized and self-contained specifically so that various failures (power failures, electrical failures, fire, combat damage, software bugs, and so on) do not affect the other systems. There is no point in having a redundant system if the backup depends on the original. They are designed to be as independent as possible.

    You’d not only have to secretly insert something into the ASIC, but you’d probably have to insert something into the board design, the system design, and probably the software.

    The one thing that could remotely be a feasible trigger would be time. Almost all systems have access to the current time in some form or another. You might be able to insert logic into an ASIC that would fail after the year 2032 or something weird like that.

    The fleet would be deployed and would operate as expected, but after 4 July 2032, all the aircraft would either self destruct or simply fail to fly. Then just plan your attack for 5 July 2032.

    Conceivably, you could fairly easily check for this in verification by feeding different years into the system for all your tests and making sure everything works. This might be useful anyway, just to make sure you don’t have some sort of new version of the Y2K bug. But you’d have to look for it to find it.
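
    A crude version of that date sweep in Python (assuming the test bench can drive the device’s notion of “now”; run_system_test and the dates here are placeholders, not any real suite):

        from datetime import date, timedelta

        def sweep_dates(run_system_test, start=date(2008, 1, 1), years=40, step_days=30):
            """Re-run the same regression while stepping the simulated clock into the future.

            run_system_test(d) stands in for 'load date d into the device's time source
            and run the normal verification suite'; it returns True on pass.
            """
            failures = []
            d = start
            end = start + timedelta(days=365 * years)
            while d <= end:
                if not run_system_test(d):
                    failures.append(d)
                d += timedelta(days=step_days)
            return failures  # any entries are dates worth a closer look

        # Toy device under test: dies on or after 4 July 2032, as in the scenario above.
        def toy_dut(d):
            return d < date(2032, 7, 4)

        print(sweep_dates(toy_dut)[:3])  # first few dates on which the "fleet" stops working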

    There are a few other kill switches I can think of, but I think you can find them if you think to look for them.

    But if you’re paranoid to the point that you think there might be *anything* in your design that is a kill switch, then it becomes impossible to verify that it doesn’t exist, because it becomes an infinite-sized verification project.

    I think the best solution is simply to do your ASIC design and fabrication in your own country. The fab is a weak point, so make sure you do that within your own borders. If you’re really paranoid, make it classified and require all the employees to have security clearances. (I didn’t say this would be cheap, but at least it’s technically feasible.)

  34. doug117 says:

    It is an interesting idea for a sci-fi book, but in real life, it won’t happen.

    (1) Even if there were a kill switch, someone’s gotta get into the system to activate it. Not generally possible. Remote diagnostic programs do not communicate with vehicles/planes/missiles in motion.

    (2) Such a [kill] scenario could exist IF the chip itself can receive and process RF or IR signals. But there’s just not a lot of those.

    (3) Having worked for a military contractor, I believe that most critical systems are developed “in-house”, that is, by American contractors working for the U.S.

    (4) Those of you old enough may remember large computers where you sat at a terminal and typed in commands. If you didn’t type in a recognized command the terminal would display “Unrecognized Command” or similar.

    Avionic systems are like that: they just won’t respond to anything they don’t understand. There is no such thing as a “Kill Yourself” command, and as such an enemy cannot issue one.

    [unless of course the enemy has a mole in the software department... AH another story topic.]

  35. Gilbert Wham says:

    “the only case of that sort of sabotage I can recall dealt with some bugged printers”

    My experience of printers is that it would be very hard to tell the difference between one that has been sabotaged and one that is just being normally obstreperous…

  36. Raines Cohen says:

    This sounds like the microcircuit equivalent of junk DNA.

  37. angryconsumerism says:

    I’m glad our semiconductor experts do estimates in “insane numbers.”

  38. Danny O'Brien says:

    Am I being crazy, or isn’t this like investing taxpayer dollars in finding a solution to the halting problem?

  39. sabik says:

    @madsci, while a specific hidden function would be difficult, I suspect a generic “no longer work” feature would be plausible enough. Just disable the chip and hope its function was important enough for that to neutralise the plane or satellite.

    You could try a voltage multiplier to disable adjacent circuits, but I suspect that would be too obvious, too easy to find.

    Receiving the signal would be a little doubtful, but would probably work at least some of the time…

  40. JayeRandom says:

    Actually, a better place to stash malcircuitry would be in the Ethernet chipsets used in every desktop and laptop.

  41. MarlboroTestMonkey7 says:

    And suddenly the tactical screen showed, in bright green letters: “Manchurian F-35”.
    And so the plane turned 180°, armed, and headed for base.

  42. Ben says:

    What I find wonderful is that we’ve made it this far in the comments before somebody mentions “Skynet.”

    Oops…

  43. IWood says:

    #15-

    I don’t know how you could tell the difference between sabotage and run-of-the-mill screw-ups, though…

    All your fighters fall out of the sky, your ships sink, your missiles explode in their silos, and then the entire Chinese army comes pouring out of the ground using tunnels they’ve dug through the center of the earth.

    That’s how you’ll know.
