A proposal to stop 3D printers from making guns is a perfect parable of everything wrong with information security

Many people worry that 3D printers will usher in an epidemic of untraceable "ghost guns," particularly guns that might evade some notional future gun-control regime emerging from the current movement to put sensible, minimal curbs on firearms, especially anti-personnel weapons.

Certainly, it's possible to 3D print a gun specifically designed to kill people (as opposed to target shooting or hunting). It's also possible to have such a gun machined in pieces by shops that will overnight the components to you for simple, on-site assembly, but 3D printers make that process simpler and may eventually make it cheaper.

Enter "C3PO," a notional solution to this problem proposed by a group of researchers at Syracuse University and the University at Buffalo. In a preprint on Arxiv, the C3PO team proposes that 3D printers could ship pre-installed with a database of hundreds of thousands of images, which the printer would attempt to match against print jobs to determine whether it was being asked to make a gun (or any other unlawful object), rejecting any job that appeared to match the prohibition list.
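To see why the researchers reach for image recognition rather than simple file matching, consider what a naive exact-match blocklist would look like: fingerprint the geometry of each banned model and refuse jobs whose fingerprint matches. Jittering every vertex by a fraction of a millimetre yields a functionally identical object with a completely different fingerprint. A minimal sketch (all names here are invented for illustration, not from the paper):

```python
import hashlib
import random

def mesh_fingerprint(vertices):
    """Hash a mesh's vertex list -- a naive exact-match blocklist entry."""
    data = ",".join(f"{x:.6f},{y:.6f},{z:.6f}" for x, y, z in vertices)
    return hashlib.sha256(data.encode()).hexdigest()

# A toy "banned" model: the four vertices of a tetrahedron.
banned = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# Jitter each vertex by under 0.1mm -- visually and functionally
# identical, but the fingerprint no longer matches the blocklist.
rng = random.Random(42)
jittered = [(x + rng.uniform(-1e-4, 1e-4),
             y + rng.uniform(-1e-4, 1e-4),
             z + rng.uniform(-1e-4, 1e-4)) for x, y, z in banned]

print(mesh_fingerprint(banned) == mesh_fingerprint(jittered))  # False
```

Fuzzy matching against rendered images is meant to close exactly this gap, which is where the trouble with adversarial examples (below) comes in.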

It's DRM for 3D printers, in other words, and it has all the problems of DRM and then some.

First, for this to work, it has to be impossible for the user to alter the configuration of the printer. To make that practical, the printer has to obfuscate its operations from the user, lock down its bootloader, and generally treat the user as an adversary (the paper explicitly describes the printer as an adversary of its owner).

Then the manufacturer will have to invoke Section 1201 of the DMCA, as well as the CFAA and other censoring rules, to suppress bug reports, because any defect in a printer could be exploited to overrule the preloaded prohibition on printing objects from the banned list.

So now you've got a printer that can't run free/open source software (because that is intrinsically user-modifiable). It has a "Ring -1" in which code executes without users being able to inspect or terminate it. Any malware that runs in that zone (anything that leverages a bug like the ones AMD is contending with right now) is, by definition, undetectable to the user, and can do anything from staging attacks on the rest of the user's network to tampering with the user's printouts to introduce subtle (and, since we're talking about high-performance materials, potentially lethal) flaws into them.

And what's more, it won't stop 3D printed guns. The burgeoning body of research on adversarial examples reveals the inadequacy of this kind of fuzzy matching. The tl;dr: the fact that you can train your printer to recognize models of guns generated by people who weren't trying to fool it tells you nothing about whether it's possible to fool it. Think of how Google was once able to use inbound links as an incredibly reliable signal of page relevance, and how quickly and easily attackers were able to generate spurious inbound links to game Google's PageRank algorithm.
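The adversarial-examples problem shows up even in a toy classifier. For a linear model, an attacker who can probe the detector can read off which direction in feature space lowers the score, and a tiny targeted nudge flips the decision. A hypothetical sketch, assuming a made-up two-feature linear "gun detector" (nothing like the paper's actual model):

```python
# Toy linear "gun detector": score(x) = w . x + b, flagged if score > 0.
# Weights and inputs are invented for illustration.
w = [0.6, -0.8]
b = -0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# A feature vector the detector flags as a banned shape.
x = [0.5, 0.1]

# FGSM-style evasion: step each feature a small amount *against* the
# gradient of the score (for a linear model the gradient is just w).
eps = 0.2
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x) > 0, score(x_adv) > 0)  # flagged before, cleared after
```

Real image classifiers are nonlinear, but the same attack works via the model's gradients, and the perturbations can be far smaller than `eps` here relative to the input.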

What's more, the intrinsic secrecy of the DRM model means that legitimate security researchers who discover defects in the gun-detection system won't be able to publish (they'll face legal retaliation under DMCA 1201, etc.), while people who want to make guns will be free to develop and productize systems that bypass the gun detector: they're already doing something illegal and have already demonstrated their indifference to the law.

Finally, these secret blacklists are an invitation to mischief and a moral hazard. The "special purpose," "narrowly constrained" blacklists of child sexual abuse imagery developed by governments in Europe, Australia and elsewhere were first stuffed with material that powerful people just wanted to block (information about online gambling or assisted suicide, for example) and then used as justification to expand national censorship regimes to block copyright infringement, then trademark infringement, then "extremist content" and so on.

Once you equip a 3D printer with a blocklist of things that they notionally can't print, everyone will want to add to that list: Erdogan and the King of Thailand will demand that satirical statuettes depicting them in caricature be banned; Disney and the copyright lobby will demand that models matching their proprietary characters and objects be banned; Ikea will want to ban third-party connectors; patent holders will want to ban third-party dinguses; the Saudi Committee for the Promotion of Virtue and the Prevention of Vice will demand a ban on depictions of Muhammad, and so on, and so on.

So this is an idea that neatly encapsulates virtually every terrible idea from the last 30 years of computing, learning none of its hard-earned lessons.

For the record, I believe in gun control and am mildly alarmed at the implications for gun control from 3D printing. But this won't solve the problem, and will make it worse, and it's precisely because gun control is an important issue that we can't surrender to the security syllogism of "Something must be done; there, I've done something."

The abuse of 3D printing technology to produce illegal weapons requires an intelligent 3D printer with early-stage malicious activity detection. The 3D printer should identify the objects to be printed, so that the manufacturing procedure of an illegal weapon can be terminated at an early stage. The lack of a large-scale dataset obstructs the development of an intelligent 3D printer equipped with deep learning techniques. The construction of a 3D printing image database at such scale, with recognition benchmarks, has not been addressed until this work. We attempt to design two working scenarios for an intelligent 3D printer and provide corresponding image datasets (tens of hundreds and tens of thousands of images). We also conduct quantitative performance benchmarking on 3D object recognition given single images and image sequences using the C3PO database. This work brings a new approach to designing an object-aware 3D printing system. The main goal is to initiate the fusion of 3D printing technology and deep learning techniques in the computer vision domain, enabling the secure use of 3D printing technology. As the 3D models are highly customized and diverse, building a robust recognition system remains a tough task. In future work, C3PO will include more common 3D models, especially for firearms.

C3PO: Database and Benchmark for Early-stage Malicious Activity Detection in 3D Printing [Zhe Li, Xiaolong Ma, Hongjia Li, Qiyuan An, Aditya Singh Rathore, Qinru Qiu, Wenyao Xu and Yanzhi Wang/Arxiv]

(via 4 Short Links)

(Image: Cryteria, CC-BY)