Volkswagen's cars didn't have a defect in their diesel engines: they were designed to lie to regulators. That matters, because regulation rests on the assumption that people lie but things tell the truth.
The Internet of Things is a world of devices (buildings, prosthetic legs, TVs, phones) that can be programmed to sense and respond to their environments. These are things that don't submit to scrutiny: they fight back. You know the old joke about the broken photocopier that works perfectly the moment the repair tech shows up? Xerox could actually build one of those and maximize its service-call revenue.
As Marcelo Rinesi of the Institute for Ethics and Emerging Technologies writes, regulatory tests have to be fair, transparent, and well-defined, and that is exactly what makes them easy to detect and defeat.
It's a demonological approach to science, where the universe is perverse and wants to hide its secrets from you.
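To make that concrete: because a regulatory test cycle is public and repeatable, firmware can fingerprint it from ordinary telemetry. Here's a minimal, hypothetical Python sketch of a defeat device; every name and threshold is invented for illustration, and this is not a description of Volkswagen's actual code.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    wheel_speed_kmh: float     # speed of the driven wheels
    steering_angle_deg: float  # driver steering input

def looks_like_dyno_test(samples: list[Telemetry]) -> bool:
    """Guess whether the car is on a dynamometer.

    On a test rig the driven wheels spin while the steering wheel never
    moves; on a real road that combination is vanishingly rare. A public,
    well-defined test is exactly this easy to fingerprint.
    """
    return bool(samples) and all(
        s.wheel_speed_kmh > 5.0 and abs(s.steering_angle_deg) < 1.0
        for s in samples
    )

def choose_emissions_map(samples: list[Telemetry]) -> str:
    # Run clean only when the car believes a regulator is watching.
    return "low_nox_mode" if looks_like_dyno_test(samples) else "high_performance_mode"
```

The better-defined the test, the simpler that fingerprint gets.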
Regulators are slow to notice these things, and the punishment usually amounts to a tax on the cheater's earnings, since charging the true cost would kill the company. The prospect of that triggers hand-wringing over the innocents (employees, customers, shareholders) who would be collateral damage in any effective enforcement, that is, enforcement harsh enough to deter other companies.
Rinesi's piece is excellent, but it leaves out one critical factor: the prohibition on reverse-engineering devices. Any device with even a little DRM is covered by the US DMCA and its foreign equivalents, like Europe's EUCD. These laws punish anyone who jailbreaks a DRM-locked device (cars, insulin pumps, phones, TVs, HVAC systems, thermostats), making it a felony to expose the wrongdoing inside it.
This compounds the already-hard regulatory problem of rooting out deliberate wrongdoing in firmware design, because it means that interested parties (independent researchers, consumer advocacy groups, competitors) can't serve as part of the regulatory mechanism by blowing the whistle on bad actors. Regulators are out there all on their own, trying to police a world that is designed to trick them.
The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but that precision makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test. And even if all software were always available for inspection, checking it for unwanted behavior would be infeasible: more often than not, programs fail because the very organizations that made them didn't, or couldn't, verify that they behave as intended.
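Source access doesn't rescue the auditor, either. As a hypothetical illustration (all names and the digest below are invented for the example), a cheat can be gated on a cryptographic hash of its trigger condition, so someone reading every line of the code can see that a special case exists without ever learning which inputs fire it:

```python
import hashlib

# SHA-256 digest of the (undisclosed) sensor state that activates the cheat.
# An auditor reading this source can see that a hidden branch exists, but
# cannot recover the triggering input without brute-forcing the hash.
_TRIGGER_DIGEST = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def in_cheat_mode(raw_sensor_state: bytes) -> bool:
    """Return True only for the one sensor reading the maker chose."""
    return hashlib.sha256(raw_sensor_state).hexdigest() == _TRIGGER_DIGEST
```

Tricks like this are well documented in malware; the point is that reading code and understanding its behavior are not the same thing.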
So our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself (not surprisingly, since it will be infused with both). Just as any user feels their computer to be a fairly unpredictable device, full of programs they never installed, doing unknown things they never agreed to, for the benefit of companies they've never heard of, inefficiently at best and actively malignant at worst (but how would you know?), cars, street lights, and even buildings will behave in the same vaguely suspicious way. Is your self-driving car deliberately slowing down to give priority to the higher-priced models? Is your green A/C really less efficient with a thermostat from a different company, or is it just not trying as hard? And your TV is supposed to use its camera only to follow your gestural commands, but it's a bit suspicious how it always offers Disney downloads when your children are sitting in front of it.
None of those things is likely to be legal, but all of them will be profitable, and, with objects actively working to hide them from the government, not to mention from you, they'll be hard to catch.
The price of the Internet of Things will be a vague dread of a malicious world [Marcelo Rinesi/IEET]
(via JWZ)