"adversarial examples"

Researchers claim to have permanently neutralized ad-blocking's most promising weapons

Last year, Princeton researchers revealed a powerful new ad-blocking technique: perceptual ad-blocking, which uses a machine-learning model trained on images of pages with the ads labeled to predict which page elements are ads to block and which are not. Read the rest
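
To give a flavor of that setup, here is a minimal sketch of training a small image classifier on labeled crops of page elements, assuming PyTorch is installed. The stand-in data, architecture, and hyperparameters are illustrative assumptions, not the Princeton team's actual pipeline.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in data: 64x64 RGB crops of page elements with ad / not-ad labels.
    crops = torch.rand(256, 3, 64, 64)
    labels = torch.randint(0, 2, (256,))
    loader = DataLoader(TensorDataset(crops, labels), batch_size=32, shuffle=True)

    # A small convolutional classifier: crop in, "ad or not ad" out.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 16 * 16, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(5):
        for x, y in loader:
            optimizer.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            optimizer.step()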

There's a literal elephant in machine learning's room

Machine-learning image classifiers use context clues to help understand the contents of a room: if they identify a dining-room table with a high degree of confidence, for example, that can help resolve ambiguity about other nearby objects, identifying them as chairs. Read the rest

Law professors and computer scientists mull whether America's overbroad "hacking" laws ban tricking robots

Robot law pioneer Ryan Calo (previously) teamed up with colleagues from the University of Washington's computer science department and law school to write Is Tricking a Robot Hacking?, a University of Washington School of Law research paper. Read the rest

Invisible, targeted infrared light can fool facial recognition software into thinking anyone is anyone else

A group of Chinese computer scientists from academia and industry have published a paper documenting a tool for fooling facial recognition software: hat-brim-mounted infrared LEDs that shine on the wearer's face, projecting shapes that are visible to CCTV cameras but invisible to the human eye. Read the rest

Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed

Machine learning models use statistical analysis of historical data to predict future events: whether you are a good candidate for a loan, whether you will violate parole, or whether the thing in the road ahead is a stop sign or a moose. Read the rest

Adversarial examples: attack can imperceptibly alter any sound (or silence), embedding speech that only voice-assistants will hear

Adversarial examples have torn into the robustness of machine-vision systems: it turns out that changing even a single well-placed pixel can confound otherwise reliable classifiers, and with the right tricks they can be made to reliably misclassify one thing as another or fail to notice an object altogether. But even as vision systems were falling to adversarial examples, audio systems remained stubbornly hard to fool, until now. Read the rest

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image classifier, this time without any knowledge of how the classifier works. Read the rest
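
A rough sketch of the query-only idea, with a tiny random linear "classifier" standing in for the remote model. The real attack targets Google's API and uses a much smarter gradient estimator, so treat every name and number here as an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 32 * 32 * 3))  # stand-in ten-class "classifier"

    def classify(image):
        # The attacker only ever sees these output probabilities.
        logits = W @ image.ravel()
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def query_attack(image, target, step=0.01, queries=2000):
        best, best_score = image.copy(), classify(image)[target]
        for _ in range(queries):
            candidate = np.clip(best + step * rng.normal(size=best.shape), 0, 1)
            score = classify(candidate)[target]
            if score > best_score:  # keep any nudge that raises the target score
                best, best_score = candidate, score
        return best

    adversarial = query_attack(rng.random((32, 32, 3)), target=3)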

Google's AI thinks this turtle is a rifle

Machine-learning-based image classifiers are vulnerable to "adversarial perturbations," in which small, seemingly innocuous modifications to images can totally confound them. Read the rest
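
One classic recipe for such perturbations is the fast gradient sign method: nudge every pixel slightly in whichever direction most increases the classifier's loss. Here is a minimal sketch, assuming a recent PyTorch/torchvision and a stock pretrained model rather than the one attacked in the turtle/rifle work.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm(image, true_label, epsilon=0.01):
        """Perturb a 1x3xHxW image tensor; true_label is a LongTensor of shape [1]."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step each pixel a tiny amount in the direction that increases the loss.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()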

Techniques for reliably fooling AI machine-vision classifiers

The OpenAI researchers were intrigued by a claim that self-driving cars would be intrinsically hard to fool (tricking them into sudden braking maneuvers, say), because "they capture images from multiple scales, angles, perspectives, and the like." Read the rest
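
One way to beat that defense, in the spirit of the OpenAI work, is to optimize the perturbation over many random transformations at once so it survives changes of scale. A minimal sketch of the scale-only version follows; the model, step size, and zoom range are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def scale_robust_perturbation(image, label, epsilon=0.03, samples=8):
        """Average the attack gradient over random rescalings of the image."""
        image = image.clone().requires_grad_(True)
        loss = 0.0
        for _ in range(samples):
            scale = 0.7 + 0.6 * torch.rand(1).item()  # random zoom, 0.7x to 1.3x
            view = F.interpolate(image, scale_factor=scale, mode="bilinear",
                                 align_corners=False)
            view = F.interpolate(view, size=image.shape[-2:], mode="bilinear",
                                 align_corners=False)
            loss = loss + F.cross_entropy(model(view), label)
        loss.backward()
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()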

Google's troll-fighting AI can be defeated by typos

Jigsaw is a "wildly ambitious" Google spin-off research unit that recently released Perspective, a machine-learning system designed to identify argumentative, belittling and mean-spirited online conversation. Within days of its release, independent researchers published a paper demonstrating a way of tricking Perspective into trusting ugly messages, just by introducing human-readable misspellings into their prose. Read the rest
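
The weakness is easy to see with a toy example: a score built on exact word matches collapses the moment a character is swapped, even though a human still reads the insult perfectly well. The word list and scoring rule below are stand-ins for illustration, not Perspective itself.

    TOXIC_WORDS = {"idiot", "stupid", "moron"}  # illustrative word list only

    def toxicity_score(text):
        words = text.lower().split()
        return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / max(len(words), 1)

    def misspell(word):
        """Swap two middle characters so humans can still read the word."""
        if len(word) < 4:
            return word
        mid = len(word) // 2
        return word[:mid - 1] + word[mid] + word[mid - 1] + word[mid + 1:]

    original = "you are an idiot"
    evasive = " ".join(misspell(w) for w in original.split())
    print(toxicity_score(original), toxicity_score(evasive))  # 0.25 vs 0.0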

Using machine learning to synthesize images that look NSFW but aren't

Yahoo has released a machine-learning model called open_nsfw that is designed to distinguish not-safe-for-work images from worksafe ones. By tweaking the model and combining it with places-CNN, MIT's scene-recognition model, Gabriel Goh created a bunch of machine-generated scenes that score high on both models: things that aren't porn, but look porny. Read the rest
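
A rough sketch of the underlying trick: start from noise and run gradient ascent on the pixels so that two separate models both assign a high score to the same image. The two stock ImageNet classifiers below are stand-ins for open_nsfw and places-CNN, purely to keep the example self-contained, and the targets and learning rate are arbitrary assumptions.

    import torch
    import torchvision.models as models

    model_a = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    model_b = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT).eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.05)
    target_a, target_b = 1, 2  # arbitrary target classes, one per model

    for step in range(200):
        optimizer.zero_grad()
        # Raise both models' scores for their targets by minimizing the negation.
        loss = -(model_a(image)[0, target_a] + model_b(image)[0, target_b])
        loss.backward()
        optimizer.step()
        image.data.clamp_(0, 1)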
