What should go in an IoT safety-rating sticker?

Now that Consumer Reports is explicitly factoring privacy and security into its tech reviews, we're making some progress toward calling out the terrible state of affairs that turned the strange dream of an Internet of Things into a nightmare we call the Internet of Shit.

Markets don't solve all our problems, but they can be a useful tool for shaping the behavior of lots of participants in big, complex systems — like the Chinese hardware manufacturers, the investors, the entrepreneurs, the techies, the product managers in incumbent businesses, the ad brokers, the regulators and the other players who make up the Internet of Shit ecosystem.

For markets to work at all, the participants need good information; the Market for Lemons phenomenon is well-understood and noncontroversial, and it predicts that if buyers in a market don't have a good way to evaluate what they're getting, the market will be taken over by substandard goods. That's as good a description of the Internet of Shit marketplace as any.

On Motherboard, Karl Bode asks a variety of commentators and experts what should go in a "food-safety-style" label for IoT devices to help people make better choices and discipline the companies whose products are substandard. The answers are really interesting!

I contributed a little to this piece, by suggesting that reviewers could score an easy win merely by noting whether the company behind a product is willing to promise never, under any circumstances, to retaliate against security researchers who go public with true information about defects in the product. That's a kind of graceful failure mode: it ensures that defects the manufacturer and the reviewers missed face as few barriers as possible to coming to the attention of the devices' owners before they're exploited.

It's weird that this even needs to be said, at least in the USA with its First Amendment protections. Uttering a true fact about technology is so clearly protected speech that it's something of a dysfunctional miracle that companies have managed to make these utterances illegal through trade secrecy, anti-hacking laws, and even copyright. It's obviously true that there are irresponsible and responsible ways to make security disclosures, but the last people who should be making that call are the corporations who stand to lose money from public information about bugs in their products. I mean, obviously.

One more point that bears consideration: labeling is not enough when there aren't any good products in the marketplace. California's Prop 65 warnings are a good example: since 1986, companies have been required to post signs or affix labels that say "This facility (or product) contains chemicals that are known to the State of California to cause cancer, birth defects, or other reproductive harm." The idea was that customers would steer clear of Prop 65-labeled products and that this would incentivize companies to do better and remove those chemicals from their products.

But that's not what happened: as it turns out, a huge number of places and products now bear these warnings, so many that it's basically impossible to go about your day if you refuse to go anywhere or buy anything that bears a Prop 65 notice. That's not least because there's no penalty for posting a warning even when no dangerous substances are involved, while failing to post a warning where one is needed carries the risk of a punishing lawsuit — so lots of businesses just post a warning "to be on the safe side."

So Californians have become entirely blind to these warnings, and any ability to use them to avoid risk has been completely destroyed.

I'm not sure what could have made this work better: maybe if companies were banned from using carcinogenic or teratogenic chemicals in public spaces, and workplace safety rules required businesses to provide meaningful notice to employees, along with training to prevent exposure?

"Until now, reviewers have primarily focused on how smart gadgets work, but not how they fail: it's like reviewing cars but only by testing the accelerator, and not the brakes," activist and author Cory Doctorow told Motherboard.

"The problem is that it's hard to tell how a device fails," Doctorow said. "'The absence of evidence isn't the evidence of absence,' so just because you don't spot any glaring security problems, it doesn't mean there aren't any."

Countless hardware vendors field products with absolutely zero transparency into what data is being collected or transmitted. As a result, consumers can often find their smart cameras and DVRs participating in DDoS attacks, or their televisions happily hoovering up an ocean of viewing data, which is then bounced around the internet sans encryption.

Product reviews that highlight these problems at the point of sale could go a long way toward discouraging such cavalier behavior toward consumer welfare and a healthy internet, pressuring companies to at least spend a few fleeting moments pretending to care about privacy and security if they value their brand reputation.

The Internet of Things Needs Food Safety-Style Ratings for Privacy and Security [Karl Bode/Motherboard]

(Image: Cryteria, CC-BY)