How To Train Your Robot, a free kids book by an engineer and his 10-year-old daughter

My buddy Ken Goldberg, a UC Berkeley professor of robotics, his 10-year-old daughter Blooma, and science communicator Ashley Chase wrote a delightful children's book called How to Train Your Robot! Illustrated by Dave Clegg, the story of a fourth-grade robotics club is a fun and understandable introduction to how deep learning can help robots gain new skills in the messy, unstructured human world.

Thanks to support from the National Science Foundation and UC Berkeley's Lawrence Hall of Science, How to Train Your Robot is available as a free PDF online, and student groups can request free hard copies!

Read the rest

This robot plays the marimba and writes and sings its own songs

Shimon, the robotic maestro from Georgia Tech's Center for Music Technology, is releasing an album and going on tour. To write lyrics, the robot combines deep learning with semantic knowledge and a sense of rhyme and rhythm. Shimon has also had a complete facelift, giving it a much more expressive mug for singing. In IEEE Spectrum, Evan Ackerman interviewed Shimon's creators, professor Gil Weinberg and PhD student Richard Savery:

IEEE Spectrum: What makes Shimon’s music fundamentally different from music that could have been written by a human?

Richard Savery: Shimon's musical knowledge is drawn from training on huge datasets of lyrics, around 20,000 prog rock songs and another 20,000 jazz songs. With this level of data, Shimon is able to draw on far more sources of inspiration than a human would ever be able to. At a fundamental level, Shimon is able to take in huge amounts of new material very rapidly, so within a day it can change from focusing on jazz lyrics to hip-hop to prog rock, or a hybrid combination of them all.

How much human adjustment is involved in developing coherent melodies and lyrics with Shimon?

Savery: Just like working with a human collaborator, there are many different ways Shimon can interact. Shimon can perform a range of musical tasks, from composing a full song by itself to just playing a part composed by a human. For the new album we focused on human-robot collaboration, so every song has some elements that were created by a human and some by Shimon.
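Savery's point about switching genres within a day maps onto ordinary language-model fine-tuning. Here's a minimal sketch of that kind of overnight corpus swap using a small off-the-shelf causal language model -- the file name, model choice, and hyperparameters are illustrative guesses, not Shimon's actual pipeline:

```python
# Hypothetical sketch: swapping a lyric model's "inspiration" corpus by
# fine-tuning a small causal language model on a new genre overnight.
# File names and hyperparameters are illustrative, not Shimon's real setup.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast, Trainer, TrainingArguments
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Point this at whichever genre the model should "focus" on today.
corpus = load_dataset("text", data_files={"train": "prog_rock_lyrics.txt"})

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=128,
                    padding="max_length")
    # Causal LM objective: predict the next token (padding included in the
    # loss here, which is fine for a sketch).
    out["labels"] = out["input_ids"].copy()
    return out

train = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lyric-model", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()
```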

Read the rest

Using StyleGAN to age everyone in 1985's hit video "Cry"

Shardcore (previously) writes, "I took Godley & Creme's seminal 1985 video and sent it through a StyleGAN network." Read the rest
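Shardcore hasn't published a pipeline here, but the usual recipe for this kind of video trick is: project each frame into the StyleGAN latent space, nudge the latent along a learned "age" direction, and re-render. A sketch of that latent arithmetic, with the encoder and generator as stand-in stubs rather than real pretrained weights:

```python
# Hypothetical sketch of the latent-space trick behind "aging" a face with
# StyleGAN. The encoder and generator are stand-in stubs; a real run would
# load pretrained StyleGAN weights and an optimization-based projector.
import torch

LATENT_DIM = 512

def encode(frame: torch.Tensor) -> torch.Tensor:
    """Stand-in for a StyleGAN encoder/projector."""
    return torch.randn(1, LATENT_DIM)

def synthesize(w: torch.Tensor) -> torch.Tensor:
    """Stand-in for the StyleGAN generator's synthesis network."""
    return torch.rand(1, 3, 1024, 1024)

# A direction in latent space correlated with apparent age, typically found
# by fitting a linear model over labeled latents (an assumption about the
# method, not Shardcore's documented approach).
age_direction = torch.randn(LATENT_DIM)
age_direction /= age_direction.norm()

frame = torch.rand(1, 3, 1024, 1024)        # one video frame
w = encode(frame)                            # frame -> latent code
aged = synthesize(w + 3.0 * age_direction)   # nudge latent, re-render
```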

Analog computers could bring massive efficiency gains to machine learning

In The Next Generation of Deep Learning Hardware: Analog Computing (Sci-Hub mirror), a trio of IBM researchers discuss how new materials could allow them to build analog computers that vastly improve the energy efficiency of training machine-learning networks. Read the rest
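The core of the pitch is that a resistive crossbar does a matrix-vector multiply as physics: Ohm's law gives per-cell currents (I = G·V) and Kirchhoff's current law sums them down each column, in one shot instead of millions of digital multiply-accumulates. Here's a toy numpy simulation of the trade-off -- massive parallelism, imperfect precision (the 2% device noise below is an illustrative figure, not from the paper):

```python
# Simulate a resistive crossbar computing a matrix-vector product: weights
# become conductances, activations become voltages, and the column currents
# are the output -- with some device variation thrown in.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(256, 784))   # layer weights -> conductances
x = rng.normal(size=784)                # activations -> input voltages

# Ideal digital result.
digital = weights @ x

# Analog version: each stored conductance is off by ~2% (illustrative).
conductances = weights * (1 + 0.02 * rng.normal(size=weights.shape))
analog = conductances @ x               # one "timestep" of physics

print("relative error:",
      np.linalg.norm(analog - digital) / np.linalg.norm(digital))
```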

Google Translate's deep dream: some translation requests yield weird religious prophecies

Feed 19 repetitions of the word "dog" to Google Translate, ask it to translate from Maori into English, and you get this: "Doomsday Clock is three minutes at twelve. We are experiencing characters and a dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus' return." Read the rest

Artificial intelligence brought the freaky visuals to this music video

A loud experimental band from Monaco named Hardcore Anal Hydrogen enlisted the aid of artificial intelligence to generate the trippy, psychedelic visuals used in their music video for the track, "Jean-Pierre."

On their site, the band details the ins and outs of how they manipulated Google Deep Dream, Neural Style Transfer, and DeepFlow to get these results:

How the AI...works seems complicated, but it's not. There are two main steps: training, then exploiting. It's the same thing as teaching an object to someone -- let's say a chair. You show him many pictures of different chairs. Then, after learning, when he sees a chair, he'll be able to answer "it's a chair," even if it's a new one he's never seen.
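The band's two steps, "training, then exploiting," look like this in miniature with scikit-learn -- the "chair pictures" below are faked as random feature vectors; the point is just the train-once/predict-forever pattern:

```python
# Step 1: training (show the model many labeled examples).
# Step 2: exploiting (a new, never-seen "chair" still gets recognized).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

chairs = rng.normal(loc=1.0, size=(100, 64))       # "pictures of chairs"
not_chairs = rng.normal(loc=-1.0, size=(100, 64))  # everything else
X = np.vstack([chairs, not_chairs])
y = np.array([1] * 100 + [0] * 100)

model = LogisticRegression().fit(X, y)             # training

new_chair = rng.normal(loc=1.0, size=(1, 64))      # exploiting
print("it's a chair!" if model.predict(new_chair)[0] == 1 else "not a chair")
```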

Turn down the volume and open your eyes and minds.

(The Awesomer) Read the rest

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In Partial Information Attacks on Real-world AI, a group of MIT computer science researchers report on their continuing work fooling Google's image-classifier, this time without any knowledge of how the classifier works. Read the rest
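The striking part is the "without any knowledge" bit: the attacker can only query the classifier and look at the scores it returns. A standard trick in this line of work is to estimate gradients from queries alone using natural evolution strategies (NES); here's a toy sketch against a stand-in black box (not Google's actual model):

```python
# Black-box adversarial attack sketch: estimate the gradient of the target
# class's score from queries only (NES with antithetic sampling), then take
# small signed steps. The "classifier" is a hidden stub, standing in for a
# real model the attacker can't inspect.
import numpy as np

rng = np.random.default_rng(0)
target_w = rng.normal(size=784)                 # hidden from the attacker

def target_class_score(x: np.ndarray) -> float:
    """Black box: probability-like score for the class we want (stub)."""
    return 1 / (1 + np.exp(-target_w @ x))

def nes_gradient(x, sigma=0.01, samples=50):
    """Estimate d(score)/dx from queries alone."""
    grad = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)
        grad += (target_class_score(x + sigma * u)
                 - target_class_score(x - sigma * u)) * u / (2 * sigma)
    return grad / samples

x = rng.normal(size=784)                 # the "rifle" image, flattened
for _ in range(100):                     # nudge it toward "helicopter"
    x += 0.01 * np.sign(nes_gradient(x))
print("target score:", target_class_score(x))
```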

What is bias in machine learning, when does it matter, and what can we do about it?

"The Trouble with Bias," Kate Crawford's (previously) keynote at the 2017 Neural Information Processing Systems is a brilliant tour through different ways of thinking about what bias is, and when we should worry about it, specifically in the context of machine learning systems and algorithmic decision making -- the best part is at the end, where she describes what we should do about this stuff, and where to get started. (via 4 Short Links) Read the rest

Google's AI thinks this turtle is a rifle

Machine-learning-based image classifiers are vulnerable to "adversarial perturbations," where small, seemingly innocuous modifications to images (including very trivial ones) can totally confound them. Read the rest
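The canonical version of such a perturbation is the fast gradient sign method (FGSM): one gradient step, capped to an imperceptible per-pixel budget, is often enough to flip a classifier's answer. A minimal PyTorch sketch with an untrained stand-in model (the real attacks target trained networks like Google's):

```python
# FGSM sketch: push each pixel a tiny amount in the direction that most
# increases the classifier's loss on the true label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # the "turtle"
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.03                                  # max per-pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print(model(image).argmax().item(), "->", model(adversarial).argmax().item())
```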

Ten recommendations to make AI safe for humanity

A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now they've released a followup, with ten clear recommendations for AI implementations in the public and private sectors. Read the rest

Robot wisdom from a deep learning system trained on ancient proverbs

Janelle Shane trained a recurrent neural network on a dataset of more than 2,000 ancient proverbs and asked it to think up its own: "A fox smells it better than a fool's for a day." Read the rest
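Shane's setups are typically character-level recurrent networks: the model reads the corpus one character at a time, learns to predict the next character, then generates by sampling its own predictions. A toy PyTorch sketch (with a one-line stand-in corpus where hers had 2,000 proverbs):

```python
# Character-level RNN sketch: train an LSTM to predict the next character,
# then sample a new "proverb" by feeding its guesses back in.
import torch
import torch.nn as nn

corpus = "a fox smells it better than a fool's for a day. "  # stand-in data
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}
data = torch.tensor([idx[c] for c in corpus])

embed = nn.Embedding(len(chars), 16)
rnn = nn.LSTM(16, 64, batch_first=True)
head = nn.Linear(64, len(chars))
opt = torch.optim.Adam([*embed.parameters(), *rnn.parameters(),
                        *head.parameters()])

for step in range(200):                 # train: predict the next character
    x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
    out, _ = rnn(embed(x))
    loss = nn.functional.cross_entropy(head(out).squeeze(0), y.squeeze(0))
    opt.zero_grad(); loss.backward(); opt.step()

state, c = None, data[:1].unsqueeze(0)  # generate from the first character
for _ in range(60):
    out, state = rnn(embed(c), state)
    c = torch.multinomial(head(out[:, -1]).softmax(-1), 1)
    print(chars[c.item()], end="")
print()
```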

Pictures of dinosaurs, by a flower-drawing algorithm

Chris Rodley fed some pictures of dinosaurs to a "style transfer" machine-learning system that had been trained to draw flowers, and this was the gorgeous result. (via Kottke) Read the rest
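The family of technique behind results like this is Gatys-style neural style transfer: optimize the pixels of an image so its deep VGG features match the content (dinosaurs) while its feature statistics (Gram matrices) match the style (flowers). A compact sketch with random stand-in images and a single feature layer, where real systems mix several:

```python
# Neural style transfer sketch: gradient descent on the image itself,
# balancing a content loss (feature match) against a style loss (Gram match).
import torch
from torchvision.models import vgg16

features = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in features.parameters():
    p.requires_grad_(False)

def gram(f):                            # style statistics of a feature map
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def feats(x, layer=8):                  # features from one VGG layer
    for i, m in enumerate(features):
        x = m(x)
        if i == layer:
            return x

content = torch.rand(1, 3, 224, 224)    # stand-in dinosaur photo
style = torch.rand(1, 3, 224, 224)      # stand-in flower artwork
img = content.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    loss = (feats(img) - feats(content)).pow(2).mean() \
         + 1e3 * (gram(feats(img)) - gram(feats(style))).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```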

Watch how this app uses AI to colorize vintage photos

This fancy interactive deep colorization software harnesses AI to fill in colors on a black-and-white photo from just a few user inputs. Watch this cool demo. Read the rest
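The core idea in interactive colorization systems like this one is to work in Lab color space: keep the photo's lightness channel, and have a network predict the two color channels from the lightness plus a few sparse user "hint" pixels. A sketch with an untrained one-layer stand-in for the real deep network:

```python
# Interactive colorization sketch: input is the lightness channel L plus a
# sparse hint tensor (hint a, hint b, and a mask marking where the user
# clicked); output is the predicted ab color channels.
import torch
import torch.nn as nn

net = nn.Conv2d(1 + 3, 2, kernel_size=3, padding=1)  # stand-in for a deep CNN

L = torch.rand(1, 1, 256, 256)            # grayscale photo = lightness
hints = torch.zeros(1, 3, 256, 256)       # channels: hint a, hint b, mask
hints[0, :2, 100, 80] = torch.tensor([0.4, -0.2])  # user clicks one pixel...
hints[0, 2, 100, 80] = 1.0                         # ...and marks it as given

ab = net(torch.cat([L, hints], dim=1))    # predicted color channels
lab = torch.cat([L, ab], dim=1)           # recombine; convert Lab->RGB to view
print(lab.shape)                          # torch.Size([1, 3, 256, 256])
```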

AI has a legibility problem

I first encountered the idea of technological "legibility" in the work of Natalie Jeremijenko (previously) who insists on the importance of humans being able to know why a computer is doing what it does. Read the rest

Machine learning manipulates photos to change the subject's gaze

A poster talk at last year's European Conference on Computer Vision introduced "DeepWarp," a machine-learning-based technique for "photorealistic image resynthesis for gaze manipulation" -- that is, you hand the algorithm an image of a face and tell it where you want the person to look, and it realistically redirects the subject's gaze to the desired location. Read the rest
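The "warp" is the key design choice: rather than painting new pixels, the network predicts a small per-pixel displacement field for the eye region, and the output is just the input resampled through that field -- which is why the results stay photorealistic. A sketch of the resampling step, with an untrained stand-in flow network:

```python
# Gaze-warping sketch: a network maps (eye crop, desired gaze) to a flow
# field, and grid_sample resamples the crop through that field.
import torch
import torch.nn as nn
import torch.nn.functional as F

flow_net = nn.Conv2d(3 + 2, 2, kernel_size=3, padding=1)  # stand-in network

eye = torch.rand(1, 3, 64, 64)              # crop of one eye
target = torch.ones(1, 2, 64, 64) * 0.3     # desired gaze, angle-encoded

# Identity sampling grid in [-1, 1], shape (1, H, W, 2).
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64),
                        torch.linspace(-1, 1, 64), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)

flow = flow_net(torch.cat([eye, target], dim=1))        # (1, 2, H, W)
warped = F.grid_sample(eye, grid + 0.05 * flow.permute(0, 2, 3, 1),
                       align_corners=True)
print(warped.shape)                         # same eye, resampled
```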

Fiction: Sgt. Augmento, Bruce Sterling's robots-take-our-jobs story

Bruce Sterling's new short story, "Sgt. Augmento," is an eyeball-kicky post-cyberpunk story full of Sterlingian zeitgeist: in the robots-took-our-jobs future, the narrator joins the army to augment his guaranteed minimum income, but finds that overseeing robot combat isn't to his liking, so he musters out into a precarious existence clinging to the side of the sharing economy. Read the rest

Google is restructuring to put machine learning at the core of all it does

Steven Levy is in characteristically excellent form in a long piece on Medium about the internal vogue for machine learning at Google; drawing on the contacts he made writing In the Plex, his must-read 2011 biography of the company, Levy paints a picture of a company that's being utterly remade around newly ascendant machine-learning techniques. Read the rest
