Artificial intelligence brought the freaky visuals to this music video

A loud experimental band from Monaco named Hardcore Anal Hydrogen enlisted the aid of artificial intelligence to generate the trippy, psychedelic visuals in the music video for their track "Jean-Pierre."

On their site, the band details the ins and outs of how they manipulated Google Deep Dream, Neural Style Transfer, and DeepFlow to get these results:

It seems complicated, but it's not. There are two main steps: training, then exploiting. It's the same as teaching someone to recognize an object, let's say a chair. You show them many pictures of different chairs. Then, once they've learned, when they see a chair they'll be able to answer "it's a chair," even if it's a new one they've never seen.
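
The band's two-step description can be sketched with the smallest possible learner: a toy perceptron standing in, very loosely, for the deep networks they actually used. Everything here -- the features, the data, the function names -- is invented for illustration:

```python
# Step 1, "training": learn weights from labeled examples.
def train(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                    # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Step 2, "exploiting": answer for an input it has never seen.
def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Pretend (legs, has_back) descriptors for "chair" vs. "not chair".
chairs = [(4, 1), (4, 1), (3, 1)]
others = [(0, 0), (2, 0), (1, 0)]
w, b = train(chairs + others, [1, 1, 1, 0, 0, 0])
print(classify(w, b, (4, 1)))  # a new chair-like input → 1
```

The point of the toy is only the shape of the workflow: many labeled examples go in once, and afterward the learned weights answer for inputs that were never in the training set.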

Turn down the volume and open your eyes and minds.

(The Awesomer) Read the rest

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image classifier, this time without any knowledge of how the classifier works. Read the rest
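
The setting can be caricatured in a few lines: the attacker may ask the model for labels, but never sees scores, gradients, or weights. The oracle below is a trivial threshold rule standing in for Google's classifier, and the halving walk is an illustrative decision-based search, not the MIT team's actual method:

```python
def black_box(x):
    """Hard-label oracle: the attacker sees only the output class.
    The threshold rule and all numbers are invented for illustration."""
    return "rifle" if sum(x) > 2.0 else "helicopter"

def label_only_attack(src, adv_start, steps=30):
    """Start from an input the oracle already calls 'rifle' and repeatedly
    move halfway toward the source image, keeping a move only if the label
    survives. Uses nothing but hard-label queries."""
    x = list(adv_start)
    for _ in range(steps):
        cand = [a + (s - a) * 0.5 for a, s in zip(x, src)]
        if black_box(cand) == "rifle":
            x = cand  # still fooled, and now closer to the source image
    return x

src = [0.7, 0.7]             # labeled "helicopter" by the oracle
rifle_example = [2.0, 2.0]   # any input known to be labeled "rifle"
adv = label_only_attack(src, rifle_example)
print(black_box(adv))  # → rifle, though adv now sits near src
```

The real attack operates on high-dimensional images and a remote commercial API, but the information constraint is the same: label queries only.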

What is bias in machine learning, when does it matter, and what can we do about it?

"The Trouble with Bias," Kate Crawford's (previously) keynote at the 2017 Neural Information Processing Systems is a brilliant tour through different ways of thinking about what bias is, and when we should worry about it, specifically in the context of machine learning systems and algorithmic decision making -- the best part is at the end, where she describes what we should do about this stuff, and where to get started. (via 4 Short Links) Read the rest

Google's AI thinks this turtle is a rifle

Machine-learning-based image classifiers are vulnerable to "adversarial perturbations," in which small, seemingly innocuous modifications to an image can totally confound them. Read the rest
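
The idea can be sketched with a toy linear model, where the gradient of the score is just the weight vector; the sign-of-gradient step below follows the style of the fast gradient sign method (Goodfellow et al.), and every number is invented for illustration:

```python
# Toy linear "classifier" with an analytic gradient: positive score means
# class "rifle", negative means "turtle".

def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_step(w, x, eps):
    """Nudge each input dimension by eps in the direction that raises the
    score; for a linear model, d(score)/dx_i = w_i, so follow sign(w_i)."""
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -0.3, 0.8], -0.5
x = [0.2, 0.9, 0.4]               # scored as "turtle" (score < 0)
x_adv = fgsm_step(w, x, eps=0.3)  # small per-dimension nudge
print(score(w, b, x) < 0, score(w, b, x_adv) > 0)  # → True True
```

Real attacks do the same thing against deep networks, where many tiny coordinated nudges, each imperceptible on its own, add up to a flipped label.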

Ten recommendations to make AI safe for humanity

A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now they've released a followup, with ten clear recommendations for AI implementations in the public and private sector. Read the rest

Robot wisdom from a deep learning system trained on ancient proverbs

Janelle Shane trained a recurrent neural network on a dataset of more than 2,000 ancient proverbs and asked it to think up its own: "A fox smells it better than a fool's for a day." Read the rest
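
Shane used a character-level recurrent neural network; as a much simpler hedged stand-in, a word-level Markov chain captures the same learn-then-sample idea. The three "proverbs" below are invented training data:

```python
import random

def build_chain(proverbs):
    """Record which word follows which across the training proverbs."""
    chain = {}
    for p in proverbs:
        words = p.split()
        for a, b in zip(words, words[1:] + ["<end>"]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, rng, max_words=12):
    """Sample a new 'proverb' by walking the learned transitions."""
    out = [start]
    while out[-1] in chain and len(out) < max_words:
        nxt = rng.choice(chain[out[-1]])
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

proverbs = [
    "a fox knows many things",
    "a fool and his gold are soon parted",
    "a watched pot never boils",
]
chain = build_chain(proverbs)
print(generate(chain, "a", random.Random(1)))
```

An RNN learns far longer-range structure than this two-word window, but the output has the same flavor: locally plausible fragments recombined into something no one ever said.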

Pictures of dinosaurs, by a flower-drawing algorithm

Chris Rodley fed some pictures of dinosaurs to a "style transfer" machine-learning system that had been trained to draw flowers, and this was the gorgeous result. (via Kottke) Read the rest

Watch how this app uses AI to colorize vintage photos

This fancy interactive deep colorization software harnesses AI to fill in colors on a black-and-white photo from just a few user inputs. Watch this cool demo. Read the rest

AI has a legibility problem

I first encountered the idea of technological "legibility" in the work of Natalie Jeremijenko (previously) who insists on the importance of humans being able to know why a computer is doing what it does. Read the rest

Machine learning manipulates photos to change the subject's gaze

A poster talk at last year's European Conference on Computer Vision introduced "DeepWarp," a machine-learning-based technique for "photorealistic image resynthesis for gaze manipulation" -- that is, you hand the algorithm an image of a face and tell it where you want the person to look, and it realistically redirects their gaze to the desired location. Read the rest

Fiction: Sgt. Augmento, Bruce Sterling's robots-take-our-jobs story

Bruce Sterling's new short story, "Sgt. Augmento," is an eyeball-kicky post-cyberpunk story full of Sterlingian zeitgeist: in the robots-took-our-jobs future, the narrator joins the army to augment his guaranteed minimum income, but finds that overseeing robot combat isn't to his liking, so he musters out into a precarious existence clinging to the side of the sharing economy. Read the rest

Google is restructuring to put machine learning at the core of all it does

Steven Levy is in characteristic excellent form in a long piece on Medium about the internal vogue for machine learning at Google; drawing on the contacts he made while writing In the Plex, his must-read 2011 biography of the company, Levy paints a picture of a company that's being utterly remade around newly ascendant machine learning techniques. Read the rest

Cataloging the problems facing AI researchers is a cross between a parenting manual and a management book

Concrete Problems in AI Safety, an excellent, eminently readable paper from a group of Google AI researchers and some colleagues, sets out five hard problems facing the field: robots might damage their environments to attain their goals; robots might figure out how to cheat to attain their goals; supervising robots all the time is inefficient; robots that are allowed to try novel strategies might cause disasters; and robots that are good at one task might inappropriately try to apply that expertise to another unrelated task. Read the rest

Emojibot uses deep learning to synthesize expressive new nonverbal communications

Dango is a personal assistant that feeds its users' messages into a deep-learning neural net to discover new expressive possibilities for emojis, GIFs and stickers, and then suggests never-before-seen combinations of graphic elements that add striking nuance to users' text messages. Read the rest

Deep learning AI "autoencodes" Blade Runner, recreates it so faithfully it gets a takedown notice

Artist and researcher Terence Broad is working on his master's at Goldsmiths' computing department; his dissertation involved training neural networks to "autoencode" movies they've been fed. Read the rest
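
An autoencoder squeezes its input through a narrow bottleneck and reconstructs it from the compressed code; Broad did this frame by frame with deep networks. As a hedged, minimal stand-in, the sketch below builds the optimal linear one-number bottleneck for invented 2-D data via power iteration (the top principal component):

```python
def principal_direction(data, iters=50):
    """Covariance-free power iteration: v <- sum_x (x . v) x, normalized.
    Converges to the top principal direction of the data."""
    v = [1.0, 0.0]
    for _ in range(iters):
        u = [0.0, 0.0]
        for x in data:
            c = x[0] * v[0] + x[1] * v[1]
            u[0] += c * x[0]
            u[1] += c * x[1]
        norm = (u[0] ** 2 + u[1] ** 2) ** 0.5
        v = [u[0] / norm, u[1] / norm]
    return v

def autoencode(v, x):
    code = x[0] * v[0] + x[1] * v[1]   # encode: 2 numbers squeezed to 1
    return [code * v[0], code * v[1]]  # decode: rebuilt from the code alone

data = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # all on the line y = 2x
v = principal_direction(data)
print(autoencode(v, (1.5, 3.0)))  # → approximately [1.5, 3.0]
```

Because the toy data really does have one degree of freedom, the single-number code is lossless; Broad's networks, forced to squeeze whole Blade Runner frames through a tight bottleneck, reconstruct something uncannily close but visibly dreamlike.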