Big Data should not be a faith-based initiative
Cory Doctorow summarizes the problem with the idea that sensitive personal information can be responsibly removed from big datasets: computer scientists are pretty sure it's impossible.
The debate is a hot one, and many non-technical privacy regulators have been taken in by sweet promises from the companies they regulate: promises of booming markets in highly sensitive personal data, somehow neutralized through a magic "de-identification" process that lets information about, say, the personal lives of cancer patients be bought and sold without compromising those patients' privacy.
The most recent example of this is a report by former Ontario privacy commissioner Ann Cavoukian and Daniel Castro from the pro-market thinktank the Information Technology and Innovation Foundation. The authors argue that the risk of "re-identification" has been grossly exaggerated and that it is indeed possible to produce meaningful, valuable datasets that are effectively "de-identified."
Princeton's Arvind Narayanan and Ed Felten have published a stinging rebuttal, pointing out the massive holes in Cavoukian and Castro's arguments -- cherry-picking studies, improperly generalizing, ignoring the existence of multiple re-identification techniques, and so on.
As Narayanan and Felten demonstrate, the Cavoukian/Castro position is grounded in a lack of understanding of both computer science and security research. The "penetrate-and-patch" method they recommend -- where systems are fielded with live data, broken through challenges, and then revised -- has been hugely ineffective both in traditional information-security development and in de-identification efforts. And as Narayanan and Felten point out, there is no shortage of computer science experts who could have helped them with this.
Cavoukian and Castro are rightly excited by Big Data and the new ways that scientists are discovering to make use of data collected for one purpose in the service of another. But they do not admit that the same theoretical advances that unlock new meaning in big datasets also unlock new ways of re-identifying the people whose data is collected in the set.
Re-identification is part of the Big Data revolution: among the new meanings we are learning to extract from huge corpuses of data is the identity of the people in that dataset. And since we're commodifying and sharing these huge datasets, they will still be around in ten, twenty and fifty years, when those same Big Data advancements open up new ways of re-identifying -- and harming -- their subjects.
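To see how little a "de-identified" dataset protects its subjects, consider the classic linkage attack: join the released records against a public auxiliary dataset on shared quasi-identifiers. The sketch below is a toy illustration with invented names and records -- not any specific study's data or method -- but it is the same basic join that real re-identification work relies on:

```python
# Toy linkage attack (all records invented for illustration).
# A "de-identified" medical dataset still carries quasi-identifiers
# (ZIP code, birth year, sex) that can be joined against a public
# auxiliary dataset -- e.g. a voter roll -- to recover identities.

deidentified = [
    {"zip": "02139", "birth_year": 1975, "sex": "F", "diagnosis": "cancer"},
    {"zip": "02139", "birth_year": 1981, "sex": "M", "diagnosis": "diabetes"},
]

voter_roll = [  # public auxiliary data, with names attached
    {"name": "Alice Example", "zip": "02139", "birth_year": 1975, "sex": "F"},
    {"name": "Bob Example", "zip": "02139", "birth_year": 1981, "sex": "M"},
]

def reidentify(deid_rows, aux_rows, keys=("zip", "birth_year", "sex")):
    """Join on quasi-identifiers; a unique match links a name to a diagnosis."""
    matches = []
    for rec in deid_rows:
        hits = [a for a in aux_rows if all(a[k] == rec[k] for k in keys)]
        if len(hits) == 1:  # unique combination => re-identified
            matches.append((hits[0]["name"], rec["diagnosis"]))
    return matches

print(reidentify(deidentified, voter_roll))
# Both "anonymous" patients come back with names attached.
```

No column in the released dataset contains a name, yet the combination of ordinary-looking attributes is enough to pin each record to one person -- which is why stripping "identifiers" alone buys so little.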
Narayanan and Felten would like to have a "best of both worlds" solution that lets the world reap the benefits of Big Data without compromising the privacy of the subjects of the datasets. But if there is such a solution, it is to be found through rigorous technical examinations, not through hand-waving, wishful thinking, and bad stats.
The faith-based belief in de-identification is at the root of the worst privacy laws in recent memory. In the EU, the General Data Protection Regulation -- the most-lobbied regulatory effort in EU history -- decided to divide data protection into two categories: identifiable data and "de-identified" data, with practically no limits on how the latter could be bought and sold. This mirrors the existing UK approach, which allows companies to unilaterally declare that the data they hold has been "de-identified" and then treat it as a commodity. In both cases, it's a disaster, as I wrote in the Guardian last year. You can't make good technical regulations by ignoring technical experts, even if the thing those technical experts are telling you is that your cherished plans are impossible.
I recommend you read both Narayanan and Felten's paper, and Cavoukian and Castro's. But in the meantime, Narayanan has helpfully summarized the debate:
Specifically, we argue that:
There is no known effective method to anonymize location data, and no evidence that it’s meaningfully achievable.
Computing re-identification probabilities based on proof-of-concept demonstrations is silly.
Cavoukian and Castro ignore many realistic threats by focusing narrowly on a particular model of re-identification.
Cavoukian and Castro concede that de-identification is inadequate for high-dimensional data. But nowadays most interesting datasets are high-dimensional.
Penetrate-and-patch is not an option.
Computer science knowledge is relevant and highly available.
Cavoukian and Castro apply different standards to big data and re-identification techniques.
Quantification of re-identification probabilities, which permeates Cavoukian and Castro’s arguments, is a fundamentally meaningless exercise.
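The high-dimensionality point above has a simple back-of-envelope demonstration. With d binary attributes there are 2^d possible profiles, so once d is much larger than log2(n), nearly every one of n records has a unique profile -- and a unique profile is a fingerprint. The simulation below uses invented numbers purely for illustration:

```python
# Back-of-envelope: how quickly records become unique in high dimensions.
# With d binary attributes there are 2**d possible profiles; for
# d >> log2(n), almost every one of n records is one-of-a-kind.
import random
from collections import Counter

random.seed(0)
n, d = 1000, 30  # hypothetical: 1,000 people, 30 yes/no attributes each
profiles = [tuple(random.randint(0, 1) for _ in range(d)) for _ in range(n)]

counts = Counter(profiles)
unique = sum(1 for p in profiles if counts[p] == 1)
print(f"{unique} of {n} profiles are unique")
```

Here 2^30 is about a billion possible profiles for only a thousand people, so collisions are vanishingly rare: virtually every record is unique, and suppressing a handful of columns barely dents that. This is why Cavoukian and Castro's concession on high-dimensional data concedes most of the interesting datasets.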