Autonomous vehicles fooled by drones that project too-quick-for-humans road signs

In MobilBye: Attacking ADAS with Camera Spoofing, a group of Ben-Gurion University security researchers describe how they were able to defeat a Renault Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do not directly operate the car) by following it with a drone that projected images of fake road signs for a 100ms instant -- too short for human perception, but long enough for the autopilot's sensors. Read the rest

Towards a method for fixing machine learning's persistent and catastrophic blind spots

An adversarial perturbation is a small, human-imperceptible change to a piece of data that flummoxes an otherwise well-behaved machine-learning classifier: for example, an otherwise highly accurate ML model that matches full-sized images to small thumbnails can be made to fail almost entirely by changing a single pixel in the thumbnail. Read the rest
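To make the idea concrete, here's a toy sketch of a single-pixel attack. The linear "classifier," its weights and the image are all made up for illustration (real attacks target deep networks), but the trick is the same: find the one pixel whose change most shifts the model's score toward the wrong answer.

```python
# Toy single-pixel adversarial perturbation against a made-up linear classifier.
# Everything here is a placeholder, not the model described in the post.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))          # toy 2-class linear model over an 8x8 "thumbnail"
x = rng.uniform(size=64)              # toy image, flattened

def predict(img):
    return int(np.argmax(W @ img))    # class 0 or 1

original = predict(x)
other = 1 - original

# Greedily pick the pixel whose change most favors the other class,
# then push it to its extreme value. This often flips the toy decision.
margin = W[other] - W[original]
pixel = int(np.argmax(np.abs(margin)))
x_adv = x.copy()
x_adv[pixel] = 1.0 if margin[pixel] > 0 else 0.0

print("before:", original, "after:", predict(x_adv))
```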

Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye

For several years, I've been covering the bizarre phenomenon of "adversarial examples" (AKA "adversarial perturbations"), these being often tiny changes to data that can cause machine-learning classifiers to totally misfire: imperceptible squeaks that make speech-to-text systems hallucinate phantom voices, or tiny shifts to a 3D image of a helicopter that make image classifiers hallucinate a rifle. Read the rest
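The best-known recipe for these tiny changes is the "fast gradient sign" method: nudge every input value a hair in whichever direction increases the classifier's error. Below is a minimal sketch of that idea; the untrained model, random image and label are stand-ins, not any of the systems mentioned above.

```python
# Minimal fast-gradient-sign sketch (Goodfellow et al.) with placeholder
# model and data; illustrative only.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in "image"
y = torch.tensor([3])                               # stand-in label

loss = F.cross_entropy(model(x), y)
loss.backward()                                      # gradient of the loss w.r.t. the pixels

epsilon = 0.05                                       # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)    # tiny, structured nudge

print("clean prediction:    ", model(x).argmax(1).item())
print("perturbed prediction:", model(x_adv).argmax(1).item())
```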

Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed

Machine learning models use statistical analysis of historical data to predict future events: whether you are a good candidate for a loan, whether you will violate parole, or whether the thing in the road ahead is a stop sign or a moose. Read the rest

Adversarial patches: colorful circles that convince machine-learning vision systems to ignore everything else

Machine learning systems trained for object recognition deploy a bunch of evolved shortcuts to choose which parts of an image are important to their classifiers and which ones can be safely ignored. Read the rest
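Roughly, an adversarial patch is trained by pasting a small learnable square onto images at random spots and optimizing it so the classifier reports a chosen target class no matter what else is in the frame. Here's a rough sketch under those assumptions; the untrained model and random background images are placeholders, not the actual patch-training code.

```python
# Rough adversarial-patch training loop with placeholder model and data.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
patch = torch.zeros(3, 8, 8, requires_grad=True)     # the learnable patch
target = torch.tensor([7])                            # class the patch should force
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(100):
    img = torch.rand(1, 3, 32, 32)                    # stand-in background image
    r, c = (int(v) for v in torch.randint(0, 25, (2,)))  # random placement
    patched = img.clone()
    patched[:, :, r:r + 8, c:c + 8] = patch.clamp(0, 1)  # paste the patch
    loss = F.cross_entropy(model(patched), target)    # make the model see the target class
    opt.zero_grad()
    loss.backward()
    opt.step()
```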

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image-classifier, this time without any knowledge of how the classifier works. Read the rest
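The striking part is that no gradients or model internals are needed: the attacker only queries the classifier and watches the scores. A common way to do that is to estimate the gradient by probing with random nudges, in the spirit of the evolutionary-strategies estimator used in this line of work. The sketch below assumes a placeholder `score_fn` standing in for whatever target-class score the remote classifier returns; it is illustrative, not the researchers' implementation.

```python
# Query-only ("black box") gradient-estimation sketch; score_fn is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def score_fn(img):
    # Placeholder for the remote classifier's target-class probability.
    return float(np.tanh(img.mean()))

def estimate_gradient(img, n_samples=50, sigma=0.01):
    # Probe the model with symmetric random nudges and average the score differences.
    grad = np.zeros_like(img)
    for _ in range(n_samples):
        noise = rng.normal(size=img.shape)
        grad += noise * (score_fn(img + sigma * noise) - score_fn(img - sigma * noise))
    return grad / (2 * sigma * n_samples)

img = rng.uniform(size=(32, 32, 3))
for step in range(20):
    g = estimate_gradient(img)
    img = np.clip(img + 0.01 * np.sign(g), 0.0, 1.0)  # nudge toward the target class
```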