Invisible, targeted infrared light can fool facial recognition software into thinking anyone is anyone else

A group of Chinese computer scientists from academia and industry has published a paper documenting a tool for fooling facial recognition software: hat-brim-mounted infrared LEDs project shapes onto the wearer's face that are visible to CCTV cameras but invisible to the human eye, and that are designed to confuse the face recognition software.

The tactic lets the attacker specify which face the classifier should "see" -- the researchers were able to trick the software into recognizing arbitrary faces as belonging to the musician Moby, the South Korean politician Lee Hoi-chang and others.

Their experiment draws on the body of work on adversarial examples: blind spots in machine-learning models that can be systematically discovered and exploited to confuse these classifiers.
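To make the adversarial-example idea concrete, here is a minimal sketch of the textbook white-box approach (the fast gradient sign method in PyTorch), not the paper's own infrared algorithm; the model and image variables are placeholders for illustration.

# Minimal sketch of a white-box adversarial perturbation via the fast
# gradient sign method (FGSM). This is NOT the paper's infrared attack,
# just the standard illustration of how classifier blind spots are found
# when the model's gradients are available. `model`, `image` and
# `target_class` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_impersonate(model, image, target_class, epsilon=0.03):
    """Nudge `image` so the classifier leans toward `target_class`."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                     # forward pass
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()                                        # gradients w.r.t. the pixels
    # Step against the gradient so the loss for the target class shrinks,
    # i.e. the image drifts toward the impersonation target.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()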

The gadget used in their attack is not readily distinguishable from a regular ball-cap, and the attack needs only a single photo of the person to be impersonated in order to work out the correct light patterns. It achieved a 70% success rate in a "white-box" attack (where the classifier is well understood), and the researchers believe they could migrate it to a "black-box" attack (where the classifier's workings are a secret) using a technique called "Particle Swarm Optimization," sketched below.
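The paper only names Particle Swarm Optimization for the black-box setting; here is a generic PSO loop for intuition, where each "particle" is a candidate setting of the LED parameters and the only feedback is the classifier's similarity score for the target identity. The score function and the six-dimensional parameter vector are assumptions for illustration, not the paper's actual setup.

# Generic particle swarm optimization (PSO) loop: a gradient-free search
# that only needs to query the classifier for a score, which is why it
# suits a black-box attack. `score_fn` and the parameter dimensionality
# are hypothetical.
import numpy as np

def pso_maximize(score_fn, dim=6, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))    # candidate LED settings
    vel = np.zeros_like(pos)
    best_pos = pos.copy()                              # per-particle best so far
    best_score = np.array([score_fn(p) for p in pos])
    g_idx = best_score.argmax()                        # swarm-wide best so far
    g_pos, g_score = best_pos[g_idx].copy(), best_score[g_idx]

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity mixes inertia, pull toward each particle's best, and
        # pull toward the swarm's best.
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_pos - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        scores = np.array([score_fn(p) for p in pos])  # one classifier query each
        improved = scores > best_score
        best_pos[improved], best_score[improved] = pos[improved], scores[improved]
        if best_score.max() > g_score:
            g_idx = best_score.argmax()
            g_pos, g_score = best_pos[g_idx].copy(), best_score[g_idx]
    return g_pos, g_score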

In this paper, we discovered that infrared can be used by attackers to either dodge or impersonate someone against machine learning systems. To prove the severity, we developed an algorithm to search for adversarial examples. Besides, we designed an inconspicuous device to implement those adversarial examples in the real world. As showcases, some photos were selected from the LFW data set as hypothetical victims. We successfully worked out adversarial examples and implemented those examples for those victims. What’s more, we conducted a large-scale study on the LFW data set, which showed that for a single attacker, over 70% of the people could be successfully attacked, if they have some similarity.

Based on our findings and attacks, we conclude that face recognition techniques today are still far from secure and reliable when applied to critical scenarios like authentication and surveillance. Researchers should pay more attention to the threat from infrared.

Invisible Mask: Practical Attacks on Face Recognition with Infrared [Zhe Zhou, Di Tang, Xiaofeng Wang, Weili Han, Xiangyu Liu and Kehuan Zhang/arXiv]

Scientists can now fool facial recognition into thinking you're Moby [Monica Chin/Mashable]
