"Adversarial perturbations" reliably trick AIs about what kind of road-sign they're seeing

An "adversarial perturbation" is a change to a physical object that is deliberately designed to fool a machine-learning system into mistaking it for something else. Read the rest

Techniques for reliably fooling AI machine-vision classifiers

The OpenAI researchers were intrigued by a claim that self-driving cars would be intrinsically hard to fool (into sudden braking maneuvers, say), because "they capture images from multiple scales, angles, perspectives, and the like." A rough sketch of the counter-argument appears below.
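
The sketch assumes a PyTorch classifier and follows the "expectation over transformation" idea: instead of attacking one fixed view of an object, optimize a single perturbation against randomly rescaled and rotated views, so it keeps fooling the model across them. The model, target label, and specific transformations are illustrative assumptions, not the researchers' exact procedure.

```python
# Hedged sketch: optimize one perturbation over random transformations,
# so it survives changes of scale and angle. `model`, `image`, and
# `target_label` are assumed placeholders.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def robust_perturbation(model, image, target_label,
                        steps=200, epsilon=0.03, lr=0.005):
    """Find a perturbation that pushes `model` toward `target_label`
    across random crops, rescalings, and rotations of `image`."""
    delta = torch.zeros_like(image, requires_grad=True)
    augment = T.Compose([
        T.RandomResizedCrop(image.shape[-1], scale=(0.7, 1.0)),
        T.RandomRotation(15),
    ])
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        # Evaluate the attack on a randomly transformed view each step.
        logits = model(augment(adv))
        loss = F.cross_entropy(logits, target_label)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation within the allowed L-infinity budget.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0.0, 1.0).detach()
```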

Netflix app will no longer run on rooted Android devices

Netflix has become one of the world's leading forces for DRM and a driver behind the W3C's dangerous, web-scale DRM project. Now the company has announced that its app will no longer run on rooted or bootloader-unlocked Android devices, because those devices can run code that overrides Google's Widevine DRM (which doesn't work well under the best of circumstances, and has harbored unpatched, showstopper bugs since its inception).

The "universal adversarial preturbation" undetectably alters images so AI can't recognize them

In a newly revised paper in Computer Vision and Pattern Recognition, a group of French and Swiss computer science researchers demonstrate "a very small perturbation vector that causes natural images to be misclassified with high probability" -- that is, a single, minor image alteration that beats machine-learning systems on nearly every image.
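
For a sense of how one perturbation can be shared across many images, here is a heavily simplified sketch. The paper's actual algorithm aggregates minimal, DeepFool-style per-image perturbations; the signed-gradient step below is a cruder stand-in, and the model, image batch, and budget are assumptions.

```python
# Hedged sketch: build a single "universal" perturbation reused across a
# whole set of images. Simplified stand-in for the paper's algorithm;
# `model`, `images`, and `labels` are assumed placeholders.
import torch
import torch.nn.functional as F

def universal_perturbation(model, images, labels,
                           epsilon=0.04, epochs=5, lr=0.01):
    """images: (N, C, H, W) tensor in [0, 1]; labels: (N,) long tensor."""
    v = torch.zeros_like(images[0:1], requires_grad=True)  # one shared vector
    for _ in range(epochs):
        for i in range(images.shape[0]):
            x = images[i:i + 1]
            logits = model((x + v).clamp(0.0, 1.0))
            # Only update v on images it does not already fool.
            if logits.argmax(dim=1).item() == labels[i].item():
                loss = F.cross_entropy(logits, labels[i:i + 1])
                grad, = torch.autograd.grad(loss, v)
                with torch.no_grad():
                    v += lr * grad.sign()        # push toward misclassification
                    v.clamp_(-epsilon, epsilon)  # keep it imperceptibly small
    return v.detach()
```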