Two of the MIT researchers behind the provocative Deep Angel project, an algorithm that digitally erases objects from photos, have now delivered a strange and beautiful system to "conjure phantasms into being."
This blurry portrait of a man may not look like much but it just sold at auction for $432,500, nearly 45 times its high estimate. What makes it so special? The Portrait of Edmond Belamy is the work of Artificial Intelligence and it's the first of its kind to sell at a major auction house.
This portrait, however, is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses. And when it went under the hammer in the Prints & Multiples sale at Christie’s on 23-25 October, Portrait of Edmond Belamy sold for an incredible $432,500, signalling the arrival of AI art on the world auction stage.
The painting, if that is the right term, is one of a group of portraits of the fictional Belamy family created by Obvious, a Paris-based collective consisting of Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier. They are engaged in exploring the interface between art and artificial intelligence, and their method goes by the acronym GAN, which stands for ‘generative adversarial network’.
‘The algorithm is composed of two parts,’ says Caselles-Dupré. ‘On one side is the Generator, on the other the Discriminator. We fed the system with a data set of 15,000 portraits painted between the 14th and the 20th centuries. The Generator makes a new image based on the set, then the Discriminator tries to spot the difference between a human-made image and one created by the Generator.’
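The adversarial loop Caselles-Dupré describes can be sketched at toy scale. This is not Obvious's model (they trained on 15,000 portrait images); it's a minimal GAN in which a linear generator learns to imitate a one-dimensional "data" distribution, and every name, distribution, and hyperparameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from N(4, 1.25) -- a stand-in for the portrait dataset.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator: a linear map g = a*z + b applied to random noise z.
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0
lr, steps, batch = 0.05, 3000, 64

for _ in range(steps):
    x = real_batch(batch)
    z = rng.standard_normal(batch)
    g = a * z + b

    # Discriminator step: learn to tell real x from generated g.
    p_real = sigmoid(w * x + c)
    p_fake = sigmoid(w * g + c)
    w -= lr * (-np.mean((1 - p_real) * x) + np.mean(p_fake * g))
    c -= lr * (-np.mean(1 - p_real) + np.mean(p_fake))

    # Generator step: nudge a, b so the updated discriminator is fooled.
    p_fake = sigmoid(w * (a * z + b) + c)
    dg = -(1 - p_fake) * w          # gradient of generator loss w.r.t. g
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

# After training, the generator's samples should cluster near the data mean.
samples = a * rng.standard_normal(10000) + b
```

Exactly as in the quote: the Discriminator's only job is to spot fakes, and the Generator improves by making that job harder, until its output statistics drift toward the real data's.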
Chris Veltri, proprietor of San Francisco's legendary Groove Merchant record shop, posted this astounding artifact to his Instagram wunderkammer of outré culture paper ephemera @collagedropoutsf! It's a poster for a lecture by artificial intelligence pioneer Herbert Simon that took place at UC Berkeley in 1974. The speech was titled "How Man and Computers Understand Language."
Far fucking out.
The theme of this year's Burning Man is I, Robot, which focuses "on the many forms of artificial intelligence that permeate our lives..." So, naturally, someone trained a neural network to come up with some camp names.
It spit out believable names like Spankles, Astro Sparkin, and Space Rock Screamin Camp, as well as weirder names like Corn Viral Hammers, Wiq Renames Spaghette, and Hellball Lounge. Then it went with some truly bizarre ones like Cohnie Stacefur Ass Chaos, Sir Liberains the Wreck Middle, and Awes Orpoop.
The woman behind the experiment, research scientist Janelle Shane, writes:
Thanks to an anonymous burner, I had a list of 1593 past Burning Man camps to feed to a neural network. A neural network is a kind of machine learning algorithm that learns to imitate the data it sees. My starting point was a textgenrnn neural net that had been previously trained on metal bands and roller derby names, so it had a few ideas of its own to bring to the table. It did not disappoint.
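Shane's actual tool was textgenrnn, a character-level recurrent network, but the "learns to imitate the data it sees" idea can be shown even more simply with a character-level Markov chain. The tiny corpus below is made up for illustration; her real list had 1,593 names.

```python
import random
from collections import defaultdict

# Hypothetical stand-in corpus; the real input was 1,593 past camp names.
names = ["Camp Sparkle", "Disco Dome", "Space Lounge", "Astro Camp",
         "Glitter Gulch", "Dust Devils", "Camp Chaos", "Neon Nest"]

ORDER = 2  # how many characters of context the model sees

def build_model(corpus, order=ORDER):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for name in corpus:
        padded = "^" * order + name + "$"   # ^ = start padding, $ = end marker
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=ORDER, rng=random.Random(42)):
    """Walk the chain from the start context until the end marker appears."""
    out, ctx = [], "^" * order
    while True:
        ch = rng.choice(model[ctx])
        if ch == "$":
            return "".join(out)
        out.append(ch)
        ctx = ctx[1:] + ch

model = build_model(names)
new_names = [generate(model) for _ in range(5)]
```

With such a short context the output recombines fragments of real names into new ones, which is exactly where names like "Spankles" come from; an RNN does the same thing with a much longer, learned memory.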
There's a bunch more of these machine-learned camp names over at Shane's site.
Let's hope life imitates art and some Burners out there actually create one (or more) of these camps this year on the playa!
Image via simon of the playa
Thanks, Dan S.!
In the current acquisition binge around artificial intelligence, tech behemoths with deep pockets lead the way, including Google, Apple, Facebook, Amazon, Intel, Microsoft, Twitter, and Salesforce. The only one with a limited consumer-facing presence is social monitoring firm Meltwater.
A group of scientists from Intel and the University of Illinois at Urbana–Champaign have published a paper called Learning to See in the Dark detailing a powerful machine-learning based image processing technique that allows regular cameras to take super-sharp pictures in very low light, without long exposures or the kinds of graininess associated with low-light photography.
Tacotron 2 is Google's new text-to-speech system, and as heard in the samples below, it sounds indistinguishable from humans.
Tacotron 2 or Human? In each of the following pairs, one clip was generated by Tacotron 2 and the other is a recording of a human, but which is which? (Audio clips are in the original post.)

"That girl did a video about Star Wars lipstick."

"She earned a doctorate in sociology at Columbia University."

"George Washington was the first President of the United States."

"I'm too busy for romance."
The system is Google’s second official generation of the technology, which consists of two deep neural networks. The first network translates the text into a spectrogram, a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio.
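The spectrogram that sits between Tacotron 2's two networks is just a short-time Fourier transform of the audio. Here's a minimal numpy sketch of that representation (a magnitude STFT on a synthetic tone, not Tacotron's mel-scaled version; frame size and hop are arbitrary choices for the example):

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: one row per time frame, one column per frequency bin."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# One second of a 440 Hz test tone at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

spec = stft_magnitude(tone)
# The strongest frequency bin should sit near 440 Hz.
peak_hz = spec.mean(axis=0).argmax() * sr / 256
```

Tacotron 2's first network predicts arrays like `spec` directly from text; WaveNet then inverts that picture back into a waveform, which is the hard, lossy step interpolation alone can't do.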
Rodney Brooks is the father of the Roomba, the founder of iRobot, and the creator of both the Baxter and Sawyer product lines from Rethink Robotics. He’s arguably the world’s most accomplished roboticist. And if he’s not – and I personally can’t think of who could challenge him for that crown – he’s definitely the top roboticist to be profiled in an Errol Morris documentary (1997’s Fast, Cheap & Out of Control).
When Rodney left Australia for the region that would later become known as Silicon Valley, there were quite literally 3 mobile robots of consequence on the entire planet. Years later, he founded a company which has since brought tens of millions of these critters into the world. His products have saved countless lives. They have also liberated thousands of acres of carpeting from dust crumbs, dog hair, and other detritus.
Amazingly, Rodney’s tenure and credentials are every bit as significant in a second high tech field: artificial intelligence. He founded the leading developer of AI coding tools throughout the 80s and early 90s. And somehow he squeezed his robotics and AI entrepreneurship in while building a storied academic career – largely at MIT, where he spent a decade running one of the two largest and most prominent AI centers in the world.
Rodney is my guest in this week's edition of the After on Podcast. You can listen to our interview by searching “After On” in your favorite podcast app, or by clicking right here:
As you’ll hear, Rodney diverges from fashionable narratives on several tech-related topics.
A loud experimental band from Monaco named Hardcore Anal Hydrogen enlisted the aid of artificial intelligence to generate the trippy, psychedelic visuals used in their music video for the track, "Jean-Pierre."
On their site, the band details the ins and outs of how they manipulated Google Deep Dream, Neural Style Transfer, and DeepFlow to get these results:
How the AI...works seems complicated, but it's not. There are two main steps: training, then exploiting. It's like teaching someone to recognize an object, say a chair. You show them many pictures of different chairs. Then, once they've learned, they'll be able to answer "it's a chair" when they see one, even a chair they've never seen before.
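The band's train-then-exploit description is the standard supervised-learning loop. A toy version, with made-up 2-D features standing in for images of chairs and non-chairs (nothing here is their actual pipeline, which used pretrained convolutional networks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: "chairs" cluster around (2, 2), "non-chairs" around (-2, -2).
chairs = rng.normal([2.0, 2.0], 1.0, (100, 2))
others = rng.normal([-2.0, -2.0], 1.0, (100, 2))
X = np.vstack([chairs, others])
y = np.array([1] * 100 + [0] * 100)

# Training step: fit a logistic-regression "recognizer" by gradient descent.
w, bias = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + bias)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    bias -= 0.1 * (p - y).mean()

# Exploiting step: classify a "chair" the model has never seen.
new_point = np.array([1.5, 2.5])
is_chair = (new_point @ w + bias) > 0
```

Deep Dream then inverts this exploiting step: instead of asking "is this a chair?", it adjusts the input image itself to make the trained network's answer more confident, which is where the video's hallucinated textures come from.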
Turn down the volume and open your eyes and minds.
Single image super-resolution (SISR) is an emerging technique that uses automated texture synthesis to sharpen dithered, blurry photos to near-pristine resolution. This example from EnhanceNet-PAT shows one approach. There's even a free website called Let's Enhance where you can up-res your own images.
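What makes learned SISR striking is the contrast with classical upscaling, which can only interpolate between existing pixels and therefore stays blurry. Here's a minimal bilinear upscaler in numpy as that baseline (an illustrative sketch, not EnhanceNet-PAT, which instead synthesizes new texture with a trained network):

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Naive bilinear upscale of a grayscale image (H x W float array)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                       # vertical blend weights
    wx = (xs - x0)[None, :]                       # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.arange(16, dtype=float).reshape(4, 4)  # a stand-in 4x4 "photo"
big = bilinear_upscale(small, 2)                  # 8x8, smooth but no new detail
```

Every output pixel here is a weighted average of four inputs, so no new information appears; SISR models cheat that limit by hallucinating plausible texture learned from millions of sharp photos.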