A deep dive into the technical feasibility of Bloomberg's controversial "Chinese backdoored servers" story

Last October, Bloomberg published what seemed to be the tech story of the year: a claim that Supermicro, the leading supplier of servers to clients from the Pentagon and Congress to Amazon, Apple and NASA, had been targeted by Chinese spies who'd inserted devastating, virtually undetectable hardware backdoors into their motherboards by subverting a small subcontractor in China.


But the story didn't quite add up. After it was published, the tech giants implicated in it released detailed, unequivocal denials, themselves almost without precedent: Big Tech's PR strategy during this kind of scandal is usually limited to terse statements that don't engage with specifics. Instead, the companies named in the story went into lavish detail explaining why it wasn't true, and couldn't be true.

These denials also don't add up: Bloomberg says it sourced its story from multiple (anonymous) sources who had direct knowledge of the incidents and who had been employed in the named organizations while they were unfolding. Bloomberg stood by its reporting, and implied that the idea that all these sources from different organizations would collude to pull off a hoax like this strains credulity.

Faced with the seemingly impossible task of sorting truth from hoax in the presence of contradictory statements from Big Tech and Bloomberg, technical experts began trying to evaluate whether the hacks attributed to the Chinese spy agencies were even possible: at first, these analyses were cautiously skeptical, but then they grew more unequivocal.

Last month, Trammell Hudson, who has developed well-regarded proof-of-concept firmware attacks, gave a detailed talk presenting his take on the story at the Chaos Communication Congress in Leipzig.

Though Hudson points out several possible weaknesses in the Bloomberg story, he mostly comes down on the side that it was at least possible.

More importantly, he describes the structural challenges in preventing this kind of attack: what we think of as a "computer" is actually a network of often very capable computers, each with its own firmware, and most often, that firmware takes the form of an unauditable, proprietary blob of closed-source code. While this has been on the security community's radar since at least the advent of BadUSB attacks, the power of the embedded systems in our computers has only increased, as has their opacity.
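To get a feel for how many of these hidden computers one machine contains, here's a minimal sketch of mine (not from Hudson's talk) that shells out to the standard lspci and lsusb tools on a Linux host and counts the peripheral controllers, most of which run their own vendor firmware that the host OS cannot audit:

```python
#!/usr/bin/env python3
# Rough sketch: enumerate PCI and USB devices on a Linux machine.
# Each entry is typically its own embedded computer running vendor
# firmware. Requires the lspci/lsusb tools (pciutils/usbutils).
import subprocess

def run(cmd):
    """Return the command's output lines, or an empty list if the tool is missing."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout.splitlines()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []

pci_devices = run(["lspci"])
usb_devices = run(["lsusb"])

print(f"{len(pci_devices)} PCI devices and {len(usb_devices)} USB devices found.")
print("Each of these likely ships with its own proprietary firmware blob:")
for line in pci_devices + usb_devices:
    print("  ", line)
```

And that listing doesn't even reach the BMC, the management engine, or the flash controllers that never show up as enumerable devices at all.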


Without open access to both schematics and source, it's virtually impossible for external experts to audit the security choices vendors make and decide whether to trust them (to say nothing of the legal risks of publishing vulnerability reports, which often give rise to threats against security researchers who dare to say that the emperor has no clothes).


This is an excellent analysis, even if it leaves me no closer to understanding whether the underlying Bloomberg story is true.

Beyond the mystery of whether the Bloomberg report is true, there's another mystery: if it is, why is Big Tech risking the reputational hit of fielding detailed rebuttals that will completely demolish their credibility when the truth comes out? (The news that Big Tech can't be trusted in detailed technical statements would, in some ways, make the Snowden revelations look like small potatoes when it comes to trusting them in the future.) And if it's not, how the fuck did Bloomberg get hoaxed?


I've heard so many theories about this, each more bizarre than the last (one trusted spook-adjacent friend of long acquaintance said that they'd heard that the Trump administration planted the story to find a leaker and it got away from them!). I can't even imagine an explanation that fits all the facts we do know.

Hudson writes:

Contrary to the Supermicro CEO's assertion that their designs are more secure because of their secrecy, I believe that openness will make our systems more secure. Servers from the Open Compute Project include full schematics, bills of materials, gerber files for the boards and so on, all of which motivated customers can use to validate that their hardware matches what is intended and that nothing has been added.
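To make that concrete, here's a small sketch (mine, not Hudson's; the file names and manifest format are assumptions, not anything OCP specifies) of the first step a motivated customer might take: checking that the published design artifacts they plan to inspect their boards against match the checksums the vendor released.

```python
#!/usr/bin/env python3
# Hypothetical sketch: verify downloaded open-hardware design files
# (gerbers, BOM, schematics) against a vendor-published SHA-256 manifest
# before using them as the reference for physical inspection.
# The manifest format ("<sha256>  <filename>" per line) and the paths
# below are placeholders for illustration, not an OCP standard.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: Path, design_dir: Path) -> bool:
    ok = True
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        if sha256_of(design_dir / name) != expected:
            print(f"MISMATCH: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    if verify(Path("SHA256SUMS"), Path("./design_files")):
        print("Design files match the published manifest.")
```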

Open source CPUs like RISC-V make it even more likely that we can have some trust in our systems, especially for things like trusted execution environments. There should be no secrets in the setup and configuration of the TEE, and we should be able to inspect the implementation for side channels or other leaks.
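As a toy illustration (mine, not Hudson's) of the kind of leak such an inspection looks for, consider secret comparison inside a TEE: a naive byte-by-byte check returns early on the first mismatch, so its timing reveals how much of a secret an attacker has guessed, while a constant-time comparison does not.

```python
#!/usr/bin/env python3
# Toy illustration of a timing side channel: the naive comparison's
# running time depends on how many leading bytes of the guess are
# correct, while hmac.compare_digest runs in (nearly) constant time.
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:          # early exit leaks the position of the first mismatch
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)

if __name__ == "__main__":
    secret = b"open-firmware-please"
    print(naive_compare(secret, b"open-firmware-nope!!"))          # timing varies with matching prefix
    print(constant_time_compare(secret, b"open-firmware-nope!!"))  # timing does not
```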

Open Hardware also requires Open Firmware to be trustable. Closed-source binary blobs in our firmware make it impossible to trust what is going on in the early stages of system initialization and also hamper efforts to detect attacks. Unless we know what is supposed to be running in the BMC or early host firmware, and have a reproducible way to build it ourselves, we have no way to know what has been installed by the OEM or by an attacker. The LinuxBoot project, which I co-lead with Ron Minnich of Google, is a way to replace much of the proprietary host firmware with Linux and its more trusted device drivers (I gave a LinuxBoot talk at 34c3).
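To sketch what that buys you in practice (my illustration, not part of LinuxBoot): once the firmware can be built reproducibly, checking a board largely comes down to comparing the image dumped from flash against the image you built yourself. The paths below are placeholders, and obtaining the dump (for example with flashrom) and masking board-specific regions such as NVRAM or serial numbers are out of scope.

```python
#!/usr/bin/env python3
# Minimal sketch: compare a dumped firmware image against a locally,
# reproducibly built reference image. File paths are placeholders;
# dumping the flash and handling board-specific regions are assumed
# to have been done beforehand.
import hashlib
import sys
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

if __name__ == "__main__":
    dumped = Path("bmc_flash_dump.bin")       # image read off the board
    reference = Path("bmc_reproducible.bin")  # image you built yourself
    if digest(dumped) == digest(reference):
        print("Firmware matches the reproducible build.")
    else:
        print("Firmware differs from the reproducible build; investigate.")
        sys.exit(1)
```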

Modchips [Trammell Hudson]

(via Four Short Links)