Deep learning AI "autoencodes" Blade Runner, recreates it so faithfully it gets a takedown notice

Artist and researcher Terence Broad is working on his master's in the computing department at Goldsmiths; his dissertation involved training neural networks to "autoencode" movies they've been fed.

"Autoencoding" is a process that reduces complex information to a small subset that the neural net believes to be most significant; in Broad's dissertation, he reduced each frame of Ridley Scott's Blade Runner to a 200 digit number, then invoked the net to reconstruct the image just using that data.

The result was nothing short of fantastic: working without human supervision to identify the significant elements of each frame, the neural net captured the most important data so well that its reconstruction of Blade Runner triggered a copyright notice from Warner Brothers when he posted it to YouTube.

Vox's Aja Romano got in touch with Warners to ask them about the copyright takedown, and they rescinded it. But as Romano notes, this kind of autoencoding raises interesting and thorny copyright conundra. It's pretty settled that letting a machine "read" the web in order to index it and draw inferences about meaning and structure from that index isn't a copyright violation.

What happens when machine-learning systems begin to do the same with audiovisual works, in order to do things that are protected under statute (for example, adding realtime scene narration for people with visual impairments), or legitimate areas of scholarly research?

Still, Broad noted to Vox that the way he used Blade Runner in his AI research doesn't exactly constitute a cut-and-dried legal case: "No one has ever made a video like this before, so I guess there is no precedent for this and no legal definition of whether these reconstructed videos are an infringement of copyright."

But whether or not his videos continue to rise above copyright claims, Broad's experiments won't just stop with Blade Runner. On Medium, where he detailed the project, he wrote that he "was astonished at how well the model performed as soon as I started training it on Blade Runner," and that he would "certainly be doing more experiments training these models on more films in future to see what they produce."

The potential for machines to accurately and easily "read" and recreate video footage opens up exciting possibilities both for artificial intelligence and video creation. Obviously there's still a long way to go before Broad's neural network generates earth-shattering video technology, but we can safely say already — we've seen things you people wouldn't believe.

Autoencoding Blade Runner [Terence Broad/Medium]

Autoencoding Video Frames, dissertation for the degree of MSci Creative Computing [Terence Broad/Academia.edu]

A guy trained a machine to "watch" Blade Runner. Then things got seriously sci-fi. [Aja Romano/Vox]