Meet Saqib, a Microsoft dev in London who lost the use of his eyes at age 7. Here's a neat little profile of his artificial intelligence development work from Microsoft Cognitive Services:
This 'Trump Deep Nightmare' video is insane. Insanely accurate, that is. Don't watch while using psychedelic drugs, unless highly experienced.
MIT professor Marvin Minsky, a "founding father" of the field of artificial intelligence whose work opened up new vistas in computer science, cognitive psychology, philosophy, robotics, and optics, has died of a brain hemorrhage. He was 88.
In 1959, Minsky co-founded MIT's Artificial Intelligence Laboratory (now the Computer Science and Artificial Intelligence Laboratory) and dedicated his career to exploring how we might replicate the functions of the human brain in a machine, a research journey he hoped would help us better understand our own minds.
"No computer has ever been designed that is ever aware of what it's doing," Minsky once said. "But most of the time, we aren't either."
What if we could automate the writing of clickbait headlines, thus freeing up clickbait writers to do useful work? That's the question Lars Eidnes wanted to answer when he programmed a recurrent neural network to generate "formulaic and unoriginal" headlines like these:

- Top Yoga Songs For Halloween
- How To Make A Classic Cold Cheese Cake
- Are You Living Without A 5,000-Year-Old Style?
- Jimmy Kimmel And David Beckham Play A Girl At The San Francisco Comic Con
Eidnes trained the network by feeding it two million headlines scraped from Buzzfeed, Gawker, Jezebel, Huffington Post and Upworthy.
How realistic can we expect the output of this model to be? Even if it can learn to generate text with correct syntax and grammar, it surely can’t produce headlines that contain any new knowledge of the real world? It can’t do reporting? This may be true, but it’s not clear that clickbait needs to have any relation to the real world in order to be successful. When this work was begun, the top story on BuzzFeed was “50 Disney Channel Original Movies, Ranked By Feminism.” More recently they published “22 Faces Everyone Who Has Pooped Will Immediately Recognized.” It’s not clear that these headlines are much more than a semi-random concatenation of topics their userbase likes, and as seen in the latter case, 100% correct grammar is not a requirement.
After training the neural network, Eidnes concludes, "It surprised me how good these headlines turned out. Most of them are grammatically correct, and a lot of them even make sense."
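Eidnes's actual model was a recurrent neural network trained on those two million scraped headlines. As a much smaller illustration of the same underlying idea — learn word-to-word transitions from real headlines, then sample new ones — here is a hedged sketch using a word-level Markov chain. The toy corpus and all names below are invented for illustration, not Eidnes's code or data:

```python
import random
from collections import defaultdict

# Toy corpus standing in for the two million scraped headlines.
corpus = [
    "Top Yoga Songs For Halloween",
    "Top 10 Songs For Your Halloween Party",
    "How To Make A Classic Cold Cheese Cake",
    "How To Make A Halloween Party Playlist",
]

START, END = "<s>", "</s>"

# Word-level bigram transitions: word -> list of observed next words.
transitions = defaultdict(list)
for headline in corpus:
    words = [START] + headline.split() + [END]
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(rng, max_words=12):
    """Walk the chain from START until END or a length cap."""
    word, out = START, []
    while len(out) < max_words:
        word = rng.choice(transitions[word])
        if word == END:
            break
        out.append(word)
    return " ".join(out)

print(generate(random.Random(0)))
```

A chain like this can only recombine bigrams it has literally seen, which is why Eidnes reached for a neural network: an RNN can generalize beyond verbatim word pairs in the training set.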
The US military's Defense Advanced Research Projects Agency is funding a new project to develop musical robots that can improvise a solo when playing with human jazz musicians. A collaboration between new media researchers at the University of Illinois at Urbana-Champaign and musicians at the University of Arizona, the MUSICA (Musical Improvising Collaborative Agent) project aims to explore non-traditional "languages" through which people and computers can interact. From Scientific American:
"There is definitely a desire for more natural kinds of communications with computational systems as they grow in their ability to be intelligent," Ben Grosser, an assistant professor of new media at the University of Illinois at Urbana-Champaign, told Live Science. "A lot of us are familiar with various methods of interacting with computers, such as text-based and touch-based interfaces, but language-based interfaces such as Siri or Google Now are extremely limited in their capabilities...."
To develop a machine capable of playing improvisational jazz, the researchers will create a database of jazz solos from a variety of musicians and have computers analyze the recordings to figure out the various processes that come into play when a musician improvises. The researchers will then develop a performance system to analyze the components of human jazz performances, including the beat, pitch, harmony and rhythm. The system will also consider what it has learned about jazz solos to communicate and respond musically in real time....
"Let's face it—trying to develop a system that can play jazz is a crazy idea," Grosser said.
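The analysis pipeline described above (beat, pitch, harmony, rhythm) is still being designed, but one of its smallest ingredients can be sketched. The following is purely a hypothetical illustration, not MUSICA code: estimating the pitch of a synthetic tone via autocorrelation, with all names, sample rates, and parameters my own assumptions:

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for this toy example

def synth_tone(freq, n_samples):
    """Generate a pure sine tone as a stand-in for a recorded jazz note."""
    return [math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(n_samples)]

def estimate_pitch(signal, min_freq=80, max_freq=1000):
    """Pick the autocorrelation lag with the highest score in range.

    The lag where the signal best matches a shifted copy of itself
    corresponds to one period of the fundamental frequency.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(SAMPLE_RATE // max_freq, SAMPLE_RATE // min_freq):
        score = sum(signal[i] * signal[i + lag]
                    for i in range(len(signal) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return SAMPLE_RATE / best_lag

tone = synth_tone(220.0, 2000)   # a quarter second of A3
print(estimate_pitch(tone))      # close to 220 Hz
```

A real system would of course face polyphony, percussion, and noisy live audio, where simple autocorrelation breaks down; the point is only that "analyzing pitch" bottoms out in concrete signal processing like this.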
What will the future of artificial intelligence actually look like? We're getting some clues already from projects like Hiroshi Ishiguro's Geminoid series with its incredibly realistic bodies, writes my friend Dennis Cass at io9. But we're also seeing hints of what real-life androids won't be like.
In a post last week, Cass talks about some common fictional tropes that have shaped our expectations of androids, but probably won't be present in the real thing.
The android that finds humanity to be a deep, abiding mystery
We flatter ourselves: A machine could never understand jokes. Then IBM's Watson uses natural language processing to understand the punning intent behind Jeopardy! questions, and we're proven wrong. If anything, the android will see us more clearly than we see ourselves. Jeremy Bailenson, director of Stanford's Virtual Human Interaction Lab, used the Xbox Kinect to analyze body language during student-teacher interactions to "mathematically uncover subtle movement patterns, many of which would not be noticed by the human eye." Psychologist Paul Ekman has discovered the "micro-expressions" the human face flashes during a lie; if facial recognition software can read those expressions, then the android can know them.
Ultimately, as "big data" gets bigger we'll ask ourselves what we want our androids to share. Do we charge them with stopping us from making bad life decisions? Or do they help us maintain our innocence? Fiction has its Rikers, the wise humans who preside over the sentient machine, often with a whiff of bearded condescension. Maybe the android will be the one who wears the bemused smile.