Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image classifier, this time without any knowledge of how the classifier works. Read the rest
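
The striking part is that the attack is black-box: the attackers can only query the model and observe its output scores. A standard query-based approach from this line of work is to estimate gradients via natural evolution strategies (NES). Below is a minimal sketch of that idea in PyTorch, assuming a hypothetical `query_loss` oracle that returns a scalar (say, the probability the API assigns to the class being attacked); it illustrates the technique, not the researchers' exact code.

```python
import torch

def nes_gradient(query_loss, x, sigma=0.001, n_samples=50):
    """Estimate the gradient of query_loss at x from queries alone
    (NES-style antithetic sampling); no access to model internals."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += u * query_loss(x + sigma * u)   # query the API
        grad -= u * query_loss(x - sigma * u)   # antithetic query
    return grad / (2 * sigma * n_samples)

def black_box_attack(query_loss, x, steps=100, lr=0.01, eps=0.05):
    """Ascend the estimated gradient while staying inside an
    L-infinity ball of radius eps around the original image."""
    x_orig = x.clone()
    for _ in range(steps):
        g = nes_gradient(query_loss, x)
        x = x + lr * g.sign()
        x = x_orig + (x - x_orig).clamp(-eps, eps)  # project back into the eps-ball
        x = x.clamp(0, 1)                           # stay a valid image
    return x
```

Because each gradient estimate costs only forward queries, the attack works against a model exposed purely as an API, at the price of many more queries than a white-box attack would need.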

Google's AI thinks this turtle is a rifle

Machine-learning-based image classifiers are vulnerable to "adversarial perturbations," where small, seemingly innocuous modifications to images can totally confound them. Read the rest
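
For a feel of how little it takes, here is a minimal sketch of the classic fast gradient sign method (Goodfellow et al.), one of the simplest ways to generate such perturbations, written for a hypothetical PyTorch classifier `model` taking a batched image tensor; it is not the specific attack from the post above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.007):
    """Fast Gradient Sign Method: nudge every pixel by eps in the
    direction that increases the model's loss on the true label."""
    image = image.clone().requires_grad_(True)   # batched (1, C, H, W) tensor
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The per-pixel change is tiny, but it is aligned with the loss
    # gradient, so it can push the image across a decision boundary.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With eps this small the perturbed image is visually indistinguishable from the original, yet the predicted class can flip entirely.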

Researchers can fool machine-learning vision systems with a single, well-placed pixel

Three researchers from Kyushu University have published a paper describing a means of reliably fooling AI-based image classifiers with a single well-placed pixel. Read the rest
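
The Kyushu paper finds its pixel with differential evolution, a derivative-free search. A rough sketch of that search using SciPy's `differential_evolution`, assuming a hypothetical `predict` function that maps an HxWx3 image with values in [0, 1] to class probabilities:

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict, image, true_class, iters=75):
    """Search for one (x, y, r, g, b) pixel change that minimizes the
    classifier's confidence in the true class."""
    h, w, _ = image.shape
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]

    def confidence(p):
        x, y, r, g, b = p
        candidate = image.copy()
        candidate[int(x), int(y)] = [r, g, b]   # overwrite a single pixel
        return predict(candidate)[true_class]   # lower is better for the attacker

    result = differential_evolution(confidence, bounds, maxiter=iters,
                                    popsize=10, tol=1e-5, seed=0)
    x, y, r, g, b = result.x
    attacked = image.copy()
    attacked[int(x), int(y)] = [r, g, b]
    return attacked
```

Like the NES attack above, this needs no gradients at all, which is what makes it so unsettling: the attacker only needs to watch the classifier's confidence scores.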

"Adversarial perturbations" reliably trick AIs about what kind of road-sign they're seeing

An "adversarial perturbation" is a change to a physical object that is deliberately designed to fool a machine-learning system into mistaking it for something else. Read the rest

Techniques for reliably fooling AI machine-vision classifiers

The OpenAI researchers were intrigued by a claim that self-driving cars would be intrinsically hard to fool (tricking them into sudden braking maneuvers, say), because "they capture images from multiple scales, angles, perspectives, and the like." Read the rest
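
Their answer was to optimize the adversarial image over a whole distribution of viewing conditions rather than a single frame, the approach known as expectation over transformation (EOT). A simplified PyTorch sketch, with a toy transformation family (random rescaling and brightness) standing in for the richer families used in practice; `model` and `target_class` are assumptions:

```python
import random
import torch
import torch.nn.functional as F

def random_transform(image):
    """A stand-in distribution of viewing conditions: random rescaling
    and brightness. Real attacks use rotation, translation, lighting,
    even 3D pose."""
    scale = random.uniform(0.9, 1.1)
    size = [int(image.shape[-2] * scale), int(image.shape[-1] * scale)]
    t = F.interpolate(image, size=size, mode="bilinear", align_corners=False)
    t = F.interpolate(t, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (t * random.uniform(0.8, 1.2)).clamp(0, 1)

def eot_attack(model, image, target_class, steps=100, lr=0.01, samples=10):
    """Expectation Over Transformation: optimize the image so it is
    misclassified *on average* across the transformation family,
    defeating the 'multiple scales and angles' defense."""
    adv = image.clone().requires_grad_(True)
    target = torch.tensor([target_class])
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        loss = sum(F.cross_entropy(model(random_transform(adv)), target)
                   for _ in range(samples)) / samples
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            adv.clamp_(0, 1)   # keep pixel values valid
    return adv.detach()
```

Averaging the loss over sampled transformations is exactly what breaks the "multiple angles" defense: the perturbation that survives is, by construction, one that fools the classifier from many viewpoints at once.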

Netflix app will no longer run on rooted Android devices

Netflix has become one of the main forces for DRM in the world and a driver behind the W3C's dangerous, web-scale DRM project. Now the company has announced that its app will no longer run on rooted or bootloader-unlocked Android devices, because those devices can run code that overrides Google's Widevine DRM (which doesn't work well under the best of circumstances, and which has harbored unpatched, showstopper bugs since its inception). Read the rest

The "universal adversarial preturbation" undetectably alters images so AI can't recognize them

In a newly revised paper in Computer Vision and Pattern Recognition, a group of French and Swiss computer science researchers demonstrate "a very small perturbation vector that causes natural images to be misclassified with high probability" -- that is, a minor image transformation can beat machine-learning systems nearly every time. Read the rest
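
"Universal" means one fixed perturbation vector works across many images at once. The paper builds it by repeatedly applying DeepFool and projecting; the sketch below substitutes plain gradient ascent on a shared perturbation, which conveys the structure but not the authors' exact algorithm. `model`, `images`, and `labels` are hypothetical PyTorch inputs:

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, images, labels, eps=0.04, epochs=5, lr=0.005):
    """Build a single perturbation v that misclassifies many images at
    once: ascend the shared loss over the whole dataset while keeping
    v inside a small L-infinity ball."""
    v = torch.zeros_like(images[0:1], requires_grad=True)
    for _ in range(epochs):
        for x, y in zip(images, labels):
            loss = F.cross_entropy(model(x.unsqueeze(0) + v), y.unsqueeze(0))
            loss.backward()
            with torch.no_grad():
                v += lr * v.grad.sign()   # ascend the shared loss
                v.clamp_(-eps, eps)       # keep v imperceptibly small
            v.grad.zero_()
    return v.detach()
```

The unnerving property is the output: a single, image-independent pattern that can be added to essentially any photograph and still fool the classifier, rather than a perturbation crafted per image.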