Researchers claim to have permanently neutralized ad-blocking's most promising weapons

Last year, Princeton researchers revealed a powerful new ad-blocking technique: perceptual ad-blocking, which uses a machine-learning model, trained on images of pages whose ads have been labeled, to predict which page elements are ads to block and which are not. Read the rest
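The training-then-prediction loop can be sketched with a toy perceptron over invented page-element features -- this is purely illustrative and bears no resemblance to the Princeton model, which works on rendered page images:

```python
# Toy sketch of learning to classify page elements as ads.
# Features and training data are invented for illustration:
# (has_sponsored_label, is_iframe, has_banner_aspect_ratio), label 1 = ad.
training_data = [
    ((1, 1, 1), 1),
    ((1, 0, 1), 1),
    ((0, 1, 1), 1),
    ((0, 0, 0), 0),
    ((0, 1, 0), 0),
    ((0, 0, 1), 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

def predict(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

# Standard perceptron training loop over the labeled examples
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)
        weights = [w + error * xi for w, xi in zip(weights, x)]
        bias += error

print(predict((1, 1, 1)))  # 1: element that looks like an ad
print(predict((0, 0, 0)))  # 0: ordinary content element
```

The invented features happen to be linearly separable, so the perceptron converges; a perceptual ad-blocker does the same thing at much larger scale, with pixels instead of hand-picked features.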

Hate-speech detection algorithms are trivial to fool

In All You Need is “Love”: Evading Hate Speech Detection, a Finnish-Italian computer science research team describes its research on evading hate-speech detection algorithms; the work will be presented next month in Toronto at the ACM Workshop on Artificial Intelligence and Security. Read the rest
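The flavor of these evasions -- word breaks, character substitutions, whitespace removal -- can be shown against a naive keyword-based detector (invented here for illustration; the paper also defeats trained classifiers, which are harder targets):

```python
# Toy sketch of text perturbations that evade a naive keyword detector.
# The blacklist words are mild stand-ins for illustration only.
BLACKLIST = {"idiot", "moron"}

def naive_detector(text):
    words = text.lower().split()
    return any(w.strip(".,!?") in BLACKLIST for w in words)

original = "you are an idiot"
evasions = [
    "you are an id iot",  # word break
    "you are an idi0t",   # character substitution
    "youareanidiot",      # whitespace removal
]

print(naive_detector(original))        # True: the slur is caught
for t in evasions:
    print(naive_detector(t))           # False: every perturbation slips through
```

A human reader parses all four strings identically, which is exactly the gap the attacks exploit.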

There's a literal elephant in machine learning's room

Machine learning image classifiers use context clues to help understand the contents of a room: if they manage to identify a dining-room table with a high degree of confidence, that can help resolve ambiguity about other objects nearby, identifying them as chairs. Read the rest
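That context-weighting step can be sketched with invented numbers: a detector slightly prefers "stool" for an ambiguous object, but a co-occurrence prior with a confidently detected dining table tips the decision -- and it is exactly this mechanism that an out-of-context elephant can derail:

```python
# Toy illustration of context-based disambiguation (all numbers invented).
raw_scores = {"chair": 0.45, "stool": 0.55}  # detector slightly prefers stool

# How strongly each label co-occurs with a dining table (invented priors)
context_prior_given_table = {"chair": 0.9, "stool": 0.3}

def rescore(scores, priors):
    """Combine raw detector scores with context priors, then renormalize."""
    combined = {k: scores[k] * priors[k] for k in scores}
    total = sum(combined.values())
    return {k: v / total for k, v in combined.items()}

final = rescore(raw_scores, context_prior_given_table)
best = max(final, key=final.get)
print(best)  # chair: the dining-table context overrides the raw scores
```

Swap in a prior conditioned on "elephant in the room" and the same arithmetic pushes every nearby object toward the wrong label.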

Law professors and computer scientists mull whether America's overbroad "hacking" laws ban tricking robots

Robot law pioneer Ryan Calo (previously) teamed up with U Washington computer science and law-school colleagues to write Is Tricking a Robot Hacking? -- a University of Washington School of Law Research Paper. Read the rest

Invisible, targeted infrared light can fool facial recognition software into thinking anyone is anyone else

A group of Chinese computer scientists from academia and industry have published a paper documenting a tool that fools facial recognition software by shining hat-brim-mounted infrared LEDs on the wearer's face, projecting shapes that are visible to CCTV cameras but invisible to the human eye, designed to make the system identify the wearer as someone else. Read the rest

A proposal to stop 3D printers from making guns is a perfect parable of everything wrong with information security

Many people worry that 3D printers will usher in an epidemic of untraceable "ghost guns" -- especially guns that might evade some notional future gun control regime emerging from the current movement to put sensible, minimal curbs on guns, particularly anti-personnel guns. Read the rest

Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed

Machine learning models use statistical analysis of historical data to predict future events: whether you are a good candidate for a loan, whether you will violate parole, or whether the thing in the road ahead is a stop sign or a moose. Read the rest
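A minimal sketch of why these models are spoofable, using an invented logistic-regression "model": because the decision is a smooth function of the input, an attacker can follow its gradient and nudge every feature a small amount in the worst direction (the fast-gradient-sign idea):

```python
# Minimal FGSM-style adversarial perturbation against a toy logistic
# regression (weights and input invented for illustration).
import numpy as np

w = np.array([1.0, -2.0, 3.0])   # model weights
b = 0.5

def prob(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1)

x = np.array([0.5, 0.1, 0.0])    # correctly classified as class 1

# For a linear model the gradient of the logit w.r.t. the input is just w,
# so perturb each feature by a small step against the prediction:
eps = 0.3
x_adv = x - eps * np.sign(w)

print(prob(x) > 0.5)      # True:  original prediction
print(prob(x_adv) > 0.5)  # False: a bounded perturbation flips it
```

Deep networks are not linear, but they are linear enough locally that the same trick works -- which is why the stop-sign/moose confusion is so hard to stamp out.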

Google makes machine learning image classifier available to the public

Google's Cloud AutoML Vision system -- a machine-learning-based image classifier -- is now available to the general public; anyone can sign up for the program, upload a set of 20 to 10,000 images, and train a new model on them, which they can then use. Read the rest

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image classifier, this time without any knowledge of how the classifier works. Read the rest