Machine learning models keep getting fooled by adversarial attacks, and it's not clear this can ever be fixed

Machine learning models use statistical analysis of historical data to predict future events: whether you are a good candidate for a loan, whether you will violate parole, or whether the thing in the road ahead is a stop sign or a moose. Read the rest

Adversarial patches: colorful circles that convince machine-learning vision systems to ignore everything else

Machine learning systems trained for object recognition rely on a bunch of learned shortcuts to decide which parts of an image matter to their classifiers and which can be safely ignored. Read the rest
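To see how a patch can hijack those shortcuts, here's a minimal toy sketch (not the actual attack from the research): a classifier that only looks at the most "salient" (highest-variance) window of a 1-D image, and a high-contrast patch that out-competes everything else for its attention. All names and the saliency rule are invented for illustration.

```python
import statistics

def most_salient_window(img, w=3):
    # Toy saliency shortcut: pick the w-pixel window with the highest
    # variance, ignoring the rest of the image entirely.
    best_i, best_v = 0, -1.0
    for i in range(len(img) - w + 1):
        v = statistics.pvariance(img[i:i + w])
        if v > best_v:
            best_i, best_v = i, v
    return best_i

def classify(img, w=3):
    # Classify using ONLY the most salient window.
    i = most_salient_window(img, w)
    window = img[i:i + w]
    return "bright" if sum(window) / w > 0.5 else "dark"

# A uniformly bright image is classified "bright"...
image = [0.9] * 9

# ...but pasting a tiny high-contrast patch makes the classifier stare
# at the patch and ignore everything else, flipping the answer.
patch = [0.0, 1.0, 0.0]
patched = image[:3] + patch + image[6:]
```

The patch works not by resembling anything, but by being more salient than the rest of the scene, which is roughly the intuition behind the colorful circles.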

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image classifier, this time without any knowledge of how the classifier works. Read the rest
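The key idea behind such black-box attacks is that you don't need the model's internals, only its answers: query it repeatedly, keep perturbations that nudge its confidence score in the direction you want. Here's a minimal sketch of that query-only loop against a made-up stand-in classifier (the real work targets image models; every function here is hypothetical):

```python
import random

def confidence(pixels):
    # Hypothetical stand-in for a black-box model's score for the
    # attacker's target class -- here, just average brightness.
    return sum(pixels) / len(pixels)

def label(pixels):
    # Hard decision exposed by the "API".
    return "target" if confidence(pixels) > 0.5 else "benign"

def black_box_attack(pixels, budget=0.1, step=0.02, max_queries=5000, seed=0):
    """Score-based random search: tweak one pixel at a time, keeping
    only changes that raise the target-class score, while staying
    within a per-pixel perturbation budget. No gradients needed."""
    rng = random.Random(seed)
    base = list(pixels)          # remember the original image
    adv = list(pixels)
    best = confidence(adv)
    for _ in range(max_queries):
        if label(adv) == "target":
            return adv           # attack succeeded
        i = rng.randrange(len(adv))
        cand = adv[:]
        # Nudge one pixel up, clamped to the budget and valid range.
        cand[i] = min(1.0, base[i] + budget, cand[i] + step)
        s = confidence(cand)
        if s > best:             # keep only improving queries
            adv, best = cand, s
    return adv                   # best effort if budget ran out
```

A usage sketch: starting from an image labeled "benign", the loop finds a barely perturbed version (each pixel moved at most `budget`) that the classifier calls "target", using nothing but repeated queries.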