Charlie Miller made headlines in 2015 as part of the team that showed it was possible to remotely drive a Jeep Cherokee over the internet, triggering a 1.4-million-vehicle recall. Now he's quit his job at Uber, where he worked on security for future self-driving taxis, and he's not optimistic about the future of this important task.
To start with, self-driving cabs will be -- by definition -- fully computerized. Earlier car hacks relied on hijacking the minority of vehicle functions that were computer-controlled, but on a self-driving car, everything is up for grabs. Also: by design, there may be no manual controls (and even if there are, they'll be locked against random intervention by taxi passengers!).
It gets worse: passengers have unsupervised physical access to the car. In information security, we generally assume that if attackers can get unsupervised physical access to a device, all bets are off (this is sometimes called the evil maid attack, as one of the common threat models is a hotel chambermaid who accesses a laptop while the owner is out of their room). Someone who wants to attack a self-driving taxi only needs to hail it -- and worse still, federal law requires that diagnostic ports like OBD2 remain accessible, so they can't simply be blocked off.
Finally, there's the combinatorial complexity of autonomous taxis supplied by third parties: for Uber to field a secure taxi, it not only has to get its own technology right, it also has to find someone to sell it a car that is likewise secure.
Miller argues that solving autonomous vehicles’ security flaws will require some fundamental changes to their security architecture. Their internet-connected computers, for instance, will need code signing, a measure that ensures they only run trusted code signed with a certain cryptographic key; today, only Tesla has talked publicly about implementing that feature. Cars’ internal networks will need better segmentation and authentication, so that critical components don’t blindly follow commands from the OBD2 port. They’ll need intrusion detection systems that can alert the driver -- or rider -- when something anomalous happens on the car’s internal networks. (Miller and Valasek designed one such prototype.) And to prevent hackers from getting an initial, remote foothold, cars need to limit their “attack surface”: any services that might accept malicious data sent over the internet.
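To make the intrusion-detection idea concrete: legitimate CAN bus traffic is highly periodic, so one common heuristic is to learn each arbitration ID's normal message rate and flag IDs that suddenly spike or appear out of nowhere. The sketch below is a hypothetical illustration of that heuristic only -- the `CanFrequencyMonitor` class, its thresholds, and its learning scheme are my assumptions, not the design of Miller and Valasek's actual prototype.

```python
# Hypothetical frequency-based CAN anomaly detector (illustrative only).
from collections import defaultdict

class CanFrequencyMonitor:
    """Flags CAN arbitration IDs whose message rate jumps well above baseline."""

    def __init__(self, window_s=1.0, threshold=2.0):
        self.window_s = window_s        # length of each observation window, seconds
        self.threshold = threshold      # alert if rate > threshold * baseline
        self.baseline = {}              # arb_id -> max messages seen per window
        self.counts = defaultdict(int)  # arb_id -> messages in the current window
        self.window_start = None
        self.learning = True

    def observe(self, timestamp, arb_id):
        """Feed one CAN frame; returns arb_ids flagged when a window closes."""
        if self.window_start is None:
            self.window_start = timestamp
        alerts = []
        if timestamp - self.window_start >= self.window_s:
            alerts = self._close_window()
            self.window_start = timestamp
        self.counts[arb_id] += 1
        return alerts

    def finish_learning(self):
        """End the training phase; subsequent windows are checked, not learned."""
        self._close_window()
        self.learning = False

    def _close_window(self):
        alerts = []
        for arb_id, n in self.counts.items():
            if self.learning:
                # Record the busiest window seen for this ID as its baseline.
                self.baseline[arb_id] = max(self.baseline.get(arb_id, 0), n)
            else:
                base = self.baseline.get(arb_id, 0)
                # An ID never seen in training, or one far above its normal
                # rate, is the signature of injected traffic.
                if base == 0 or n > self.threshold * base:
                    alerts.append(arb_id)
        self.counts.clear()
        return alerts
```

In use, the monitor would first watch a stretch of normal driving to learn per-ID baselines, then run live; a real detector would also need to handle event-driven (non-periodic) messages and context, which this sketch deliberately ignores.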
Complicating those fixes? Companies like Uber and Didi don’t even make the cars they use, but instead have to bolt on any added security after the fact. “They’re getting a car that already has some attack surface, some vulnerabilities, and a lot of software they don’t have any control over, and then trying to make that into something secure,” says Miller. “That’s really hard.”
That means solving autonomous vehicles’ security nightmares will require far more open conversation and cooperation among companies. That’s part of why Miller left Uber, he says: He wants the freedom to speak more openly within the industry. “I want to talk about how we’re securing cars and the scary things we see, instead of designing these things in private and hoping that we all know what we’re doing,” he says.
Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them [Andy Greenberg/Wired]