As I have often noted, medical devices have terrifyingly poor security models, even when compared to the rest of the nascent Internet of Things, where security is, at best, an afterthought (at worst, it's the enemy!).
An excellent feature by Monte Reel and Jordan Robertson in Bloomberg Business documents the Mayo Clinic's experiment with hiring penetration testers to examine the security of its devices. The results were predictably alarming: devices with the power of life and death over entire buildings full of people are badly secured and prone to hacking — so much so that a KPMG survey found "81 percent of health information technology executives said the computer systems at their workplaces had been compromised by a cyber attack within the past two years."
On the basis of the pen testers' findings, the Mayo Clinic instituted a stringent set of security requirements for its vendors, but few hospitals and clinics have the bargaining power to make similar demands. What's more, vendors come up with terrible solutions to their own security problems. For example, the manufacturer of an automated drug-safe that could be trivially "jackpotted" (caused to dump all its opioids and other controlled substances) "fixed" the problem by requiring fingerprint authentication — from surgical teams who were operating in sterile environments, wearing gloves to protect themselves from infectious agents.
The FDA is remarkably uninterested in all of this (the agency seems "to literally be waiting for someone to be killed"). Doctors and administrators are prone to shooting the messengers, accusing security researchers of writing scare-stories. But pen testers and auditors keep finding hospitals playing host to all kinds of malware that sneakily exfiltrates confidential patient data — and, alarmingly, to ransomware packages with the power to lock up a hospital's entire electronic infrastructure.
One thing the authors miss, regrettably, is the other titanic and immovable impediment to auditing and improving medical device security: copyright law. Section 1201 of the DMCA makes it a felony (punishable by five years in prison and a $500,000 fine) to disclose information that would assist in removing a digital lock. Medical device vendors routinely deploy these locks to prevent their competitors from making interoperable products. For example, an insulin pump maker might use digital locks to prevent patients from using cheaper insulin; or a pacemaker vendor could use them to prevent competitors from making their own software for organizing patient data, forcing hospitals and doctors' offices to buy an annual license to use the original vendor's software.
This year's Copyright Office proceedings on Section 1201 of the DMCA included a filing from Jay Radcliffe, who features heavily in the Bloomberg story, in which he documents the ways the DMCA has prevented him from disclosing potentially lethal vulnerabilities in commonly used medical implants (including the insulin pump his own doctor wants him to use).
Whatever commercial and technical impediments exist to securing medical devices — bad vendors, lack of negotiating power in hospitals, the intrinsic difficulty of information security — the DMCA makes it all much, much worse.
But it's a very good article, despite this important omission. Especially good is the passage in which infosec researcher Billy Rios finds himself critically ill, in a hospital bed, being kept alive by many of the insecure devices he'd been railing against:
After a few dazed days in bed, he got his bearings and assessed his situation. His bed was plugged into a network jack. The pressure bands strapped around his legs, which periodically squeezed his calves to aid circulation, were also connected to a computer. He counted 16 networked devices in his room, and eight wireless access points. The most obvious of these was the CareFusion infusion pump, a brand he hadn't looked into yet, that controlled the fluids that were pumped into his arm. "It wasn't like I was going to turn to the doctor and say, 'Don't hook me up to that infusion pump!' " Rios recalls. "I needed that thing."
He noticed that the other patient in his room, separated from him by a curtain, was connected to a Hospira pump. "I kept thinking, 'Should I tell him?' " Rios says. He opted for silence.
When he was able to drag himself out of bed, Rios wheeled his infusion pump into the bathroom, where he gave it a good once-over. "I'm looking at the wireless card, pushing the buttons on it, seeing what menus I can get to," he recalls. It only inflamed his concerns. "Whatever Wi-Fi password they're using to let the pump join the network, I could get that off the pump pretty easily."
In the hallway just outside his room, Rios found a computerized dispensary that stored medications in locked drawers. Doctors and nurses normally used coded identification badges to operate the machine. But Rios had examined the security system before, and he knew it had a built-in vulnerability: a hard-coded password that would allow him to "jackpot" every drawer in the cabinet. Such generic passwords are common in many medical devices, installed to allow service technicians to access their systems, and many of them cannot be changed. Rios and a partner had already alerted Homeland Security about those password vulnerabilities, and the agency had issued notices to vendors informing them of his findings. But nothing, at least at this hospital, had been done. In the hallway, he quickly discovered that all the medications in the device's drawers could have been his for the taking. "They hadn't patched it at this point, so I was testing some passwords on it, and I was like, 'This s— works!' "
It's Way Too Easy to Hack the Hospital
[Monte Reel and Jordan Robertson/Bloomberg Business]