Facial recognition isn't just bad because it invades privacy: it's bad because privacy invasions fuel discrimination

Bruce Schneier writes in the New York Times that banning facial recognition (as cities like San Diego, San Francisco, Oakland, Brookline and Somerville have done) is not enough: there are plenty of other ways to automatically recognize people (gait detection, high-resolution photos of hands that reveal fingerprints, voiceprints, etc.), and all of them will be used for the same purpose that makes facial recognition bad for our world: sorting us into categories and treating us differently based on those categories.

Some of these distinctions are easy to imagine: showing different ads on billboards based on who's looking at them, for example. Others are more sinister: targeting us for police intervention, charging us higher prices, or denying us entry to a place of business.

Schneier says that we need to regulate more than facial recognition: we need to regulate recognition itself — and the data brokers whose data sets are used to map recognition data to people's identities.

Regulating this system means addressing all three steps of the process. A ban on facial recognition won't make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there's only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large internet surveillance companies like Facebook and Google collect dossiers on us more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

We're Banning Facial Recognition. We're Missing the Point.
[Bruce Schneier/New York Times]

(Image: Cryteria, CC-BY, modified)