At this week's London Design Festival, design firm Uniform displayed Solo Radio. Stand in front of the device and it scans your face, feeds the scan to software that assesses your emotional state, and then plays a mood-appropriate song chosen by Spotify's algorithms. Read the rest
A group of some of the most powerful technology companies on the planet has formed a partnership on artificial intelligence.
Read the rest
Human biases exposed by Implicit Association Tests can be replicated in machine learning using GloVe word embedding, according to a new study where GloVe was trained on "a corpus of text from the Web." Read the rest
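The study's core measurement can be sketched with a toy version of the Word Embedding Association Test: check whether a target word sits closer, by cosine similarity, to one attribute set than another. The vectors below are hand-made stand-ins, not real GloVe embeddings, and the word choices are illustrative.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two word vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, pleasant, unpleasant):
    # Mean similarity to each attribute set; positive means the word
    # leans "pleasant" in this toy embedding space.
    return (np.mean([cosine(word_vec, v) for v in pleasant])
            - np.mean([cosine(word_vec, v) for v in unpleasant]))

# Hypothetical 3-d vectors standing in for trained embeddings.
vecs = {
    "flower":   np.array([0.9, 0.1, 0.0]),
    "insect":   np.array([0.1, 0.9, 0.0]),
    "pleasant": np.array([1.0, 0.0, 0.1]),
    "awful":    np.array([0.0, 1.0, 0.1]),
}

print(association(vecs["flower"], [vecs["pleasant"]], [vecs["awful"]]))  # positive
print(association(vecs["insect"], [vecs["pleasant"]], [vecs["awful"]]))  # negative
```

With real GloVe vectors trained on web text, the study found exactly these IAT-style asymmetries, including ones tied to gender and race.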
My friend and Cool Tools partner Kevin Kelly was interviewed about his book, The Inevitable. In this video, he discusses what will happen when artificial intelligence is sold like electricity, as a utility.
Previously: In the future you will own nothing and have access to everything Read the rest
NPR has a quiz that invites you to guess which of six poems were written by a computer program, and which were written by humans. A group of 10 judges weren't fooled, but I had trouble correctly guessing all of them. I appreciated the computer-generated poems as much as the human-written ones.
Read the rest
The dirty rusty wooden dresser drawer.
A couple million people wearing drawers,
Or looking through a lonely oven door,
Flowers covered under marble floors.
And lying sleeping on an open bed.
And I remember having started tripping,
Or any angel hanging overhead,
Without another cup of coffee dripping.
Surrounded by a pretty little sergeant,
Another morning at an early crawl.
And from the other side of my apartment,
An empty room behind the inner wall.
A thousand pictures on the kitchen floor,
Talked about a hundred years or more.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) trained a neural network to recognize materials (e.g., metal grate, plants, concrete sidewalk) being hit with a drumstick, and synthesize sounds to accompany the actions. It did well enough to fool humans into thinking the sounds were real.
From the abstract:
Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions.
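The "example-based synthesis" step in the abstract can be sketched as a nearest-neighbor lookup: for a predicted sound-feature vector, retrieve the waveform of the closest real recording in a database. The feature dimensions, clip lengths, and random data below are placeholders, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in database: features and waveforms of 100 recorded drumstick hits.
db_features = rng.normal(size=(100, 16))      # one feature vector per hit
db_waveforms = rng.normal(size=(100, 2205))   # ~50 ms clips at 44.1 kHz

def synthesize(predicted_features):
    """Return the stored waveform whose features best match the prediction."""
    dists = np.linalg.norm(db_features - predicted_features, axis=1)
    return db_waveforms[np.argmin(dists)]

clip = synthesize(rng.normal(size=16))
```

In the real system, the predicted features come from a recurrent network watching the silent video, and the retrieved examples are actual recorded impact sounds.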
[via] Read the rest
Meet Saqib, a Microsoft dev in London who lost the use of his eyes at age 7. Here's a neat little profile of his artificial intelligence development work from Microsoft Cognitive Services:
Read the rest
This 'Trump Deep Nightmare' video is insane. Insanely accurate, that is. Don't watch while using psychedelic drugs, unless highly experienced.
Read the rest
“Learning from Tay’s introduction,” a blog post dated March 25, 2016 by Microsoft Research Corporate VP Peter Lee:
Read the rest
MIT professor Marvin Minsky, a "founding father" of the field of artificial intelligence whose work opened up new vistas in computer science, cognitive psychology, philosophy, robotics, and optics, has died of a brain hemorrhage. He was 88.
In 1959, Minsky co-founded MIT's Artificial Intelligence Laboratory (now the Computer Science and Artificial Intelligence Laboratory) and dedicated his career to exploring how we might replicate the functions of the human brain in a machine, a research journey he hoped would help us better understand our own minds.
"No computer has ever been designed that is ever aware of what it's doing," Minsky once said. "But most of the time, we aren't either."
(New York Times)
Read the rest
What if we could automate the writing of clickbait headlines, thus freeing up clickbait writers to do useful work? That's the question Lars Eidnes wanted to answer when he programmed a recurrent neural network to generate "formulaic and unoriginal" headlines like these:
Top Yoga Songs For Halloween
How To Make A Classic Cold Cheese Cake
Are You Living Without A 5,000-Year-Old Style?
Jimmy Kimmel And David Beckham Play A Girl At The San Francisco Comic Con
Eidnes trained the network by feeding it two million headlines scraped from Buzzfeed, Gawker, Jezebel, Huffington Post and Upworthy.
How realistic can we expect the output of this model to be? Even if it can learn to generate text with correct syntax and grammar, it surely can’t produce headlines that contain any new knowledge of the real world? It can’t do reporting? This may be true, but it’s not clear that clickbait needs to have any relation to the real world in order to be successful. When this work was begun, the top story on BuzzFeed was “50 Disney Channel Original Movies, Ranked By Feminism." More recently they published “22 Faces Everyone Who Has Pooped Will Immediately Recognized." It’s not clear that these headlines are much more than a semi-random concatenation of topics their userbase likes, and as seen in the latter case, 100% correct grammar is not a requirement.
After training the neural network, Eidnes concludes, "It surprised me how good these headlines turned out. Most of them are grammatically correct, and a lot of them even make sense."
Take a look at the results on his site, Click-o-Tron, "possibly the first website in the world where all articles are written in their entirety by a Recurrent Neural Network." Read the rest
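The character-level generation Eidnes describes can be sketched as follows: the network keeps a hidden state, and each sampled character is fed back in as the next input. The weights here are random, so the output is gibberish; in his system they are learned from the two million scraped headlines. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
chars = list("abcdefghijklmnopqrstuvwxyz ")
vocab, hidden = len(chars), 32

# Random stand-ins for trained parameters.
Wxh = rng.normal(0, 0.1, (hidden, vocab))
Whh = rng.normal(0, 0.1, (hidden, hidden))
Why = rng.normal(0, 0.1, (vocab, hidden))

def sample(seed_ix, n):
    """Generate n characters, feeding each output back in as the next input."""
    x = np.zeros(vocab); x[seed_ix] = 1
    h = np.zeros(hidden)
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h)      # recurrent state update
        p = np.exp(Why @ h); p /= p.sum()   # softmax over characters
        ix = rng.choice(vocab, p=p)         # sample the next character
        x = np.zeros(vocab); x[ix] = 1
        out.append(chars[ix])
    return "".join(out)

print(sample(0, 40))  # gibberish until the weights are trained
```

Training replaces the random matrices with weights that make real headlines likely, at which point sampling produces the "formulaic and unoriginal" output quoted above.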
The US military's Defense Advanced Research Projects Agency is funding a new project to develop musical robots that can improvise a solo when playing with human jazz musicians. A collaboration between new media researchers at the University of Illinois at Urbana-Champaign and musicians at the University of Arizona, the goal of the MUSICA (Musical Improvising Collaborative Agent) project is to explore non-traditional "languages" for people and computers to interact. From Scientific American:
Read the rest
"There is definitely a desire for more natural kinds of communications with computational systems as they grow in their ability to be intelligent," Ben Grosser, an assistant professor of new media at the University of Illinois at Urbana-Champaign, told Live Science. "A lot of us are familiar with various methods of interacting with computers, such as text-based and touch-based interfaces, but language-based interfaces such as Siri or Google Now are extremely limited in their capabilities...."
To develop a machine capable of playing improvisational jazz, the researchers will create a database of jazz solos from a variety of musicians and have computers analyze the recordings to figure out the various processes that come into play when a musician improvises. The researchers will then develop a performance system to analyze the components of human jazz performances, including the beat, pitch, harmony and rhythm. The system will also consider what it has learned about jazz solos to communicate and respond musically in real time....
"Let's face it—trying to develop a system that can play jazz is a crazy idea," Grosser said.
A front-runner for the People's Choice Award at this year's AI Video Competition is Mario Lives! The University of Tübingen project aims to make the video game character Mario self-aware. He's not there yet, but the researchers are taking small steps. Read the rest
My friend and Cool Tools partner Kevin Kelly is hosting a roundtable discussion tomorrow about artificial intelligence via Google Hangouts on Air. Here are the details. Read the rest
What will the future of artificial intelligence actually look like? We're getting some clues already from projects like Hiroshi Ishiguro's Geminoid series with its incredibly realistic bodies, writes my friend Dennis Cass at io9. But we're also seeing hints of what real-life androids won't be like.
In a post last week, Cass talks about some common fictional tropes that have shaped our expectations of androids, but probably won't be present in the real thing.
Read the rest
The android that finds humanity to be a deep, abiding mystery
We flatter ourselves: A machine could never understand jokes. Then IBM's Watson uses natural language processing to understand the punning intent behind Jeopardy! questions, and we're proven wrong. If anything, the android will see us more clearly than we see ourselves. Jeremy Bailenson, director of Stanford's Virtual Human Interaction Lab, used the Xbox Kinect to analyze body language during student-teacher interactions to "mathematically uncover subtle movement patterns, many of which would not be noticed by the human eye." Psychologist Paul Ekman has discovered the "micro-expressions" the human face flashes during a lie; if facial recognition software can read them, then the android can know it.
Ultimately, as "big data" gets bigger we'll ask ourselves what we want our androids to share. Do we charge them with stopping us from making bad life decisions? Or do they help us maintain our innocence? Fiction has its Rikers, the wise humans who preside over the sentient machine, often with a whiff of bearded condescension. Maybe the android will be the one who wears the bemused smile.