Rod McCullom at Undark has a terrific overview of the perpetual "virtual lineup," where half of all American adults "are enrolled in unregulated facial recognition networks used by state and local law enforcement agencies."
Much of his article centers on research being done by Georgetown's Center on Privacy & Technology:
Studies of racial bias in face recognition algorithms are few and far between. The research that has been done, however, suggests that these systems do, in fact, show signs of bias. The most prominent study, co-authored by an FBI expert, found that several leading algorithms performed worse on African Americans, women, and young adults than on Caucasians, men, and older people, respectively. In interviews, we were surprised to find that two major face recognition companies did not test their algorithms for racial bias.
This algorithmic bias has been described as "the coded gaze" by Joy Buolamwini, an MIT Media Lab graduate student, in a nod to the literary and sociological term "the white gaze," which describes seeing the world from the perspective of a white person and assuming, always, that your audience is white.
Buolamwini, a Fulbright Fellow and two-time recipient of NASA's Astronaut Scholarship, launched the Algorithmic Justice League last year to highlight biases in machine learning. "Why isn't my face being detected? We have to look at how we give machines sight," she said in a TED Talk late last year. "Computer vision uses machine-learning techniques to do facial recognition. You create a training set with examples of faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect."
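The mechanism Buolamwini describes can be sketched with a toy example. This is an illustration, not a real face-detection pipeline: the one-dimensional "features," the threshold, and the two synthetic groups are all invented for the sketch. A detector that learns a single "norm" from a skewed training set ends up rejecting faces that deviate from that norm:

```python
# Toy sketch (hypothetical, not a real face detector): the detector
# learns the mean feature value of its training faces and "detects"
# only faces close to that learned norm.

def train_detector(training_faces, threshold=2.0):
    """Learn the average of the training features; accept anything near it."""
    mean = sum(training_faces) / len(training_faces)

    def detect(face):
        # A face is "detected" only if it is close to the learned norm.
        return abs(face - mean) <= threshold

    return detect

# Synthetic 1-D "features": group A clusters near 0, group B near 5.
group_a = [0.1, -0.2, 0.3, 0.0, -0.1]
group_b = [5.1, 4.9, 5.2, 5.0]

# Skewed training set: mostly group A, a single group B example --
# analogous to a training set that "isn't really that diverse."
detector = train_detector(group_a + group_b[:1])

detect_rate_a = sum(detector(f) for f in group_a) / len(group_a)
detect_rate_b = sum(detector(f) for f in group_b) / len(group_b)
print(detect_rate_a, detect_rate_b)  # group A is always detected, group B never
```

Because the learned norm sits near group A's cluster, every group A face falls within the threshold while every group B face "deviates too much from the established norm" and is missed, which is the failure mode the quote describes.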
Here's her talk on algorithmic bias:
• Facial Recognition Technology Is Both Biased and Understudied (Undark via TED)