Researchers think that adversarial examples could help us maintain privacy from machine learning systems

Machine learning systems are pretty good at finding hidden correlations in data and using them to infer potentially compromising information about the people who generate that data: for example, researchers fed an ML system a bunch of Google Play reviews by reviewers whose locations were explicitly given in their Google Plus profiles; based on this, the model was able to predict the locations of other Google Play reviewers with about 44% accuracy. Read the rest
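
The underlying attack is just ordinary supervised learning: train a text classifier on the users who disclosed a location, then run it on the users who didn't. A minimal sketch with scikit-learn (all the toy reviews and labels below are invented for illustration, not the researchers' data) might look like:

```python
# Minimal sketch of the attribute-inference idea: train on reviews from users
# who disclosed their location, then infer the location of users who didn't.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Reviews from users whose profiles disclosed a location (invented examples).
labeled_reviews = [
    "great for catching the tube to work",
    "parking near the beach is impossible",
    "useful on the subway during my commute",
    "love hiking the canyon trails on weekends",
]
locations = ["london", "los_angeles", "new_york", "los_angeles"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(labeled_reviews, locations)

# Infer the location of a reviewer who never disclosed it.
print(model.predict(["the subway was delayed again this morning"]))
```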

Surveillance camera hallucinates face in the snow, won't shut up about it

A beauty from last February: Kyle McDonald tweeted redacted social media screenshots from a surveillance camera owner whose camera emitted a steady stream of alerts because it saw a face in the garden -- a "face" that was just a random assortment of grime and snow that only vaguely resembled one, but still triggered the facial recognition algorithm. In the end, the only way to shut the camera up was to stomp around in the snow until the "face" was erased. Read the rest

Announcement of Tumblr's sale to WordPress classified as pornography by Tumblr's notorious "adult content" filter

Tumblr is being sold to WordPress parent company Automattic for a reported price of "less than $3m," a substantial decline from the $1.1b Yahoo paid for the company in 2013 (Yahoo subsequently sold Tumblr and several other startups it had overpaid for and then ruined to Verizon for more than $4b). Read the rest

Adversarial Fashion: clothes designed to confuse license-plate readers

Adversarial Fashion has a line of clothes (jackets, tees, hoodies, dresses, skirts, etc) designed to confound automated license-plate readers; one line is tiled with fake license plates that spell out the Fourth Amendment (!); the designers presented at Defcon this year. (via JWZ) Read the rest

"Intellectual Debt": It's bad enough when AI gets its predictions wrong, but it's potentially WORSE when AI gets it right

Jonathan Zittrain (previously) is consistently a source of interesting insights that often arrive years ahead of their wider acceptance in tech, law, ethics and culture (2008's The Future of the Internet (And How to Stop It) is surprisingly relevant 11 years later); in a new long essay on Medium (shorter version in the New Yorker), Zittrain examines the perils of the "intellectual debt" we incur when we rely on machine learning systems that make predictions whose rationale we don't understand, because without an underlying theory of those predictions, we can't know their limitations. Read the rest

Autonomous vehicles fooled by drones that project too-quick-for-humans road-signs

In MobilBye: Attacking ADAS with Camera Spoofing, a group of Ben-Gurion University security researchers describe how they were able to defeat a Renault Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do not directly operate the car) by tailing the car with a drone that projected images of fake road signs for 100ms at a time -- too short for human perception, but long enough for the autopilot's sensors. Read the rest
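
A quick back-of-the-envelope calculation (the ~100ms flash is from the paper; the frame rates are my illustrative assumptions) shows why a flash that brief can be invisible to a driver but rock-solid to a camera:

```python
# Assumed numbers: ~100 ms projected flash, common camera frame rates.
flash_ms = 100
for fps in (30, 60):
    frame_ms = 1000 / fps
    frames_exposed = flash_ms / frame_ms
    print(f"{fps} fps camera: ~{frames_exposed:.0f} frames contain the fake sign")

# A detector that accepts a sign appearing in even a few consecutive frames
# will register it, while a human glancing at the road is unlikely to notice
# a tenth-of-a-second flicker projected on a wall or tree.
```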

Towards a method for fixing machine learning's persistent and catastrophic blind spots

An adversarial perturbation is a small, human-imperceptible change to a piece of data that flummoxes an otherwise well-behaved machine learning classifier: for example, there's a really accurate ML model that guesses which full-sized image corresponds to a small thumbnail, but if you change just one pixel in the thumbnail, the classifier stops working almost entirely. Read the rest
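
The classic recipe for finding such perturbations is the fast gradient sign method (FGSM) -- not necessarily the method in the linked work, but the standard illustration of the idea. In this sketch, `model` is any differentiable PyTorch classifier and `image`/`label` a batched, correctly classified example (all assumptions):

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Compute the loss gradient with respect to the input pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss;
    # epsilon is small enough that humans can't see the change, but the
    # classifier's prediction can flip entirely.
    return (image + epsilon * image.grad.sign()).detach()
```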

A 40cm-square patch that renders you invisible to person-detecting AIs

Researchers from KU Leuven have published a paper showing how they can create a 40cm x 40cm "patch" that fools a convolutional neural network classifier -- otherwise a good tool for identifying humans -- into thinking that a person is not a person, something that could be used to defeat AI-based security camera systems. They theorize that they could just print the patch on a t-shirt and get the same result. Read the rest
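
The general shape of patch optimization (my sketch, not the KU Leuven code; `training_batches`, `paste` and `detector_person_score` are assumed helpers) is to learn a small image that, overlaid on photos of people, drives the detector's "person" confidence toward zero:

```python
import torch

patch = torch.rand(3, 64, 64, requires_grad=True)   # stand-in for the printed patch
optimizer = torch.optim.Adam([patch], lr=0.01)

for images in training_batches:                      # assumed batch iterator
    patched = paste(images, patch.clamp(0, 1))       # overlay the patch on each person
    score = detector_person_score(patched).mean()    # detector's "person" confidence
    optimizer.zero_grad()
    score.backward()                                 # gradient-descend on the confidence
    optimizer.step()
```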

Small stickers on the ground trick Tesla autopilot into steering into opposing traffic lane

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware, including remote control over the steering, and an adversarial example attack on the autopilot that confuses the car into driving into the oncoming traffic lane. Read the rest

Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye

For several years, I've been covering the bizarre phenomenon of "adversarial examples" (AKA "adversarial perturbations"), these being often tiny changes to data that can cause machine-learning classifiers to totally misfire: imperceptible squeaks that make speech-to-text systems hallucinate phantom voices; or tiny shifts to a 3D image of a helicopter that make image classifiers hallucinate a rifle. Read the rest

Tumblr's porn filter blocked Tumblr's images illustrating what Tumblr's porn filter won't block

Yesterday, despite the manifest, glaring problems with its porn filter, Tumblr turned on mandatory porn-blocking for all its users' content, so that anything its bots identified as pornographic would be invisible. Read the rest

Researchers claim to have permanently neutralized ad-blocking's most promising weapons

Last year, Princeton researchers revealed a powerful new ad-blocking technique: perceptual ad-blocking uses a machine-learning model trained on images of pages with the ads labeled, so it can predict which elements of a new page are ads to be blocked and which are not. Read the rest
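
A minimal sketch of that idea (my assumptions, not the Princeton code: `ad_paths` and `non_ad_paths` are lists of labeled screenshots of page elements) is just an image classifier whose verdict decides what gets hidden:

```python
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

def features(path):
    # Downscale each element screenshot to a small fixed-size pixel vector.
    return np.asarray(Image.open(path).convert("L").resize((32, 32))).ravel()

X = [features(p) for p in ad_paths + non_ad_paths]   # labeled screenshots
y = [1] * len(ad_paths) + [0] * len(non_ad_paths)    # 1 = ad, 0 = not an ad
clf = RandomForestClassifier().fit(X, y)

def should_block(element_screenshot_path):
    return clf.predict([features(element_screenshot_path)])[0] == 1
```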

Hate-speech detection algorithms are trivial to fool

In All You Need is “Love”: Evading Hate Speech Detection, a Finnish-Italian computer science research team describe their research on evading hate-speech detection algorithms; their work will be presented next month in Toronto at the ACM Workshop on Artificial Intelligence and Security. Read the rest
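		
The paper's evasions are strikingly low-tech: tiny edits that leave the text readable to humans but scramble the tokens the classifier sees. A rough sketch of the three tricks the title alludes to (`classifier` is any text model with a predict method, an assumption for illustration):

```python
import random

def insert_typo(text):
    i = random.randrange(1, len(text) - 1)
    return text[:i] + text[i] + text[i:]      # duplicate one character

def remove_spaces(text):
    return text.replace(" ", "")              # break word boundaries apart

def append_benign(text, word="love"):
    return f"{text} {word} {word}"            # pad with innocuous positive words

for attack in (insert_typo, remove_spaces, append_benign):
    evasive = attack("some hateful sentence here")
    print(attack.__name__, classifier.predict([evasive]))
```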

There's a literal elephant in machine learning's room

Machine learning image classifiers use context clues to help understand the contents of a scene: for example, if a classifier identifies a dining-room table with a high degree of confidence, that can help resolve ambiguity about other objects nearby, identifying them as chairs. Read the rest
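
A toy numeric illustration (all numbers invented) of how a co-occurrence prior both sharpens a detector and blinds it to the out-of-context elephant:

```python
import numpy as np

labels = ["chair", "elephant"]
raw_scores = np.array([0.40, 0.45])           # ambiguous raw detector output
cooccur_with_table = np.array([0.90, 0.01])   # P(label appears near a dining table)

# Rescore using context: multiply by the co-occurrence prior and renormalize.
rescored = raw_scores * cooccur_with_table
rescored /= rescored.sum()
print(dict(zip(labels, rescored.round(3))))   # chair wins; the elephant vanishes
```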

Law professors and computer scientists mull whether America's overbroad "hacking" laws ban tricking robots

Robot law pioneer Ryan Calo (previously) teamed up with U Washington computer science and law-school colleagues to write Is Tricking a Robot Hacking? -- a University of Washington School of Law Research Paper. Read the rest

Invisible, targeted infrared light can fool facial recognition software into thinking anyone is anyone else

A group of Chinese computer scientists from academia and industry have published a paper documenting a tool for fooling facial recognition software: hat-brim-mounted infrared LEDs shine shapes onto the user's face that are visible to CCTV cameras but invisible to the human eye, and those shapes are crafted to make the recognition system misidentify the wearer as someone else. Read the rest
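
The core loop of such an attack (my reconstruction of the general shape, not the paper's method; `embed`, `render_ir_spots` and the face images are assumed) is a search over LED-spot parameters that pulls the lit face's embedding toward a target identity:

```python
import numpy as np

best, best_dist = None, np.inf
target = embed(target_face_image)                 # embedding of the victim identity
for trial in range(500):
    params = np.random.uniform(0, 1, size=6)      # spot positions, radii, intensities
    lit_face = render_ir_spots(attacker_face_image, params)
    dist = np.linalg.norm(embed(lit_face) - target)
    if dist < best_dist:
        best, best_dist = params, dist            # keep the best spot layout so far
print("best spot parameters:", best, "distance:", best_dist)
```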

A proposal to stop 3D printers from making guns is a perfect parable of everything wrong with information security

Many people worry that 3D printers will usher in an epidemic of untraceable "ghost guns," especially guns that might evade some notional future gun control regime that emerges out of the current movement to put sensible, minimal curbs on guns, particularly anti-personnel guns. Read the rest
