Google's AI thinks this turtle is a rifle

Machine-learning-based image classifiers are vulnerable to "adversarial perturbations": small, seemingly innocuous modifications to an image that can totally confound them.
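
To make the idea concrete, here is a minimal sketch of the classic fast-gradient-sign attack (Goodfellow et al., 2014). It is not labsix's algorithm, but it shows how nudging every pixel by a tiny amount can steer a classifier toward a chosen label. The PyTorch/torchvision calls, the [0, 1] input range, and the target index are illustrative assumptions.

```python
# Minimal FGSM-style targeted perturbation -- an illustration of the general
# vulnerability, not labsix's method. Assumes PyTorch + torchvision and an
# `image` tensor of shape (1, 3, 299, 299) scaled to [0, 1]
# (ImageNet normalization omitted for brevity).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

def fgsm_targeted(image, target_class, epsilon=0.007):
    """Nudge every pixel by at most `epsilon` toward `target_class`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient to make the target label more likely.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: 764 is assumed to be ImageNet's "rifle" index;
# check it against the label file before relying on it.
# adversarial = fgsm_targeted(image, target_class=764)
```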


This 3D-printed sculpture of a turtle has been subjected to a tiny adversarial perturbation that reliably tricks the InceptionV3 image classifier into thinking it's a rifle. The perturbation is effectively invisible: an unperturbed version that the human eye can't tell apart is correctly classified by InceptionV3 as a turtle.

The same research team has perturbed an image of an adorable kitty-cat so that InceptionV3 reliably mistakes it for guacamole.

Here is a 3D-printed turtle that is classified at every viewpoint as a "rifle" by Google's InceptionV3 image classifier, whereas the unperturbed turtle is consistently classified as "turtle".

We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle.

Our process works for arbitrary 3D models – not just turtles! We also made a baseball that classifies as an espresso at every angle! The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you'd never see a rifle underwater, or an espresso in a baseball mitt.
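
The key idea labsix describe above, optimizing the perturbation over a whole distribution of viewing conditions rather than a single image (their Expectation Over Transformation approach), can be sketched roughly as follows. The transformation set, hyperparameters, and helper names are illustrative assumptions rather than the authors' actual code; `model` is the InceptionV3 instance from the earlier snippet.

```python
# Rough sketch of the expectation-over-transformation idea: average the
# targeted loss over many randomly sampled "viewpoints" so the perturbation
# keeps working after rotation, zoom, blur, etc. The transformations and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

random_view = T.Compose([
    T.RandomRotation(30),
    T.RandomResizedCrop(299, scale=(0.7, 1.0)),
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
])

def robust_adversarial(model, image, target_class,
                       steps=200, samples=10, lr=0.01, epsilon=0.05):
    """Find a small perturbation that survives random transformations."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.0
        for _ in range(samples):
            view = random_view((image + delta).clamp(0, 1))
            loss = loss + F.cross_entropy(model(view), target)
        (loss / samples).backward()
        optimizer.step()
        with torch.no_grad():
            # Keep the perturbation small so the object still looks unchanged.
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()
```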

Fooling Neural Networks in the Physical World with 3D Adversarial Objects
[Labsix]