Law professors and computer scientists mull whether America's overbroad "hacking" laws ban tricking robots

Robot law pioneer Ryan Calo (previously) teamed up with U Washington computer science and law-school colleagues to write Is Tricking a Robot Hacking? — a University of Washington School of Law Research Paper.

Calo and co are looking at the intersection of adversarial examples (blind spots in machine learning systems that make it trivial to trick them into miscategorizing their input: mistaking one face for another, reading a stop sign as a sign telling a car to speed up, or classifying a turtle as a rifle) and the Computer Fraud and Abuse Act, a ridiculously overbroad anti-hacking law inspired by a panic over the 1984 movie WarGames (seriously) that gives prosecutors almost unlimited authority to attack security researchers.
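
For readers who haven't seen one of these attacks up close, here is a rough sketch (not from the paper) of the simplest technique, the "fast gradient sign method": nudge every input pixel a tiny, barely perceptible amount in whichever direction most increases the classifier's error. The model, the class labels, and the epsilon value below are placeholders; a real-world attack like the stop-sign one targets a trained road-sign classifier with carefully crafted stickers.

```python
# Minimal FGSM sketch (assumes PyTorch). The classifier here is an untrained
# stand-in, so the label flip is not guaranteed; against a real trained model,
# a small, well-chosen epsilon is typically enough to change the prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier over 32x32 RGB "images" with 10 made-up classes.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 10),
)
model.eval()

loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # fake input image
true_label = torch.tensor([3])                        # its "correct" class

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM step: shift each pixel by epsilon in the sign of that gradient,
# i.e. the direction that most increases the classifier's error.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The legal question the paper raises is whether feeding a system an input like this (a doctored sign, a modified face, an adversarial sound) counts as "hacking" it.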

There is a case to be made that the CFAA could apply to each of these scenarios. The adversarial sound in the first scenario could constitute the "transmission" of a "command" to a "protected computer," i.e., the victim's phone. Assuming the revelation of the victim's location leads to physical harm, perhaps in the form of violence by the perpetrator, the damage requirement of CFAA has been satisfied. Similarly, by defacing the stop sign, the malicious competitor can be said to have caused the transmission of "information" — from the stop sign to the car — that led to a public safety risk. In both instances, had the attacker broken into the phone or car by exploiting a security vulnerability and altered the firmware or hardware to cause precisely the same harm, the CFAA would almost certainly apply.

On the other hand, a perhaps equally strong case could be made that CFAA does not apply. In neither scenario does the defendant circumvent any security protocols or violate any terms of service. The transmission of an adversarial sound seemingly does not cause damage without authorization to a protected computer. Rather, it causes damage to a person through an authorized mechanism — voice control — of a protected computer. With respect to the driverless car scenario, it feels like a stretch to say that minor changes to the visual world that a sensor may come across constitute the "transmission" of "a program, information, code, or command" on par with a denial-of-service attack. Regardless, there is again arguably no damage to the detector "without authorization" as required under Section 1030(a)(5)(A).

However a court comes to characterize the driverless car scenario, the same logic arguably applies to the shoplifter who evades facial recognition — at least for purposes of the CFAA. Like the founder who defaces the stop sign to mislead the car's detector, the shoplifter who alters her face to mislead the facial detector has arguably transmitted information purposely to trick the grocery store into misperceiving her so she can steal. Obviously there are differences. The founder causes physical harm, the shoplifter financial. The founder has no right to alter a stop sign whereas the shoplifter has a right to apply makeup to her own face. But from a CFAA perspective, the two situations feel closely analogous.

Is Tricking a Robot Hacking? [Ryan Calo, Ivan Evtimov, David O'Hair, Earlence Fernandes and Tadayoshi Kohno/SSRN]