See in the Dark: a machine learning technique for producing astoundingly sharp photos in very low light

A group of scientists from Intel and the University of Illinois at Urbana–Champaign have published a paper called Learning to See in the Dark detailing a powerful machine-learning-based image processing technique that allows regular cameras to take super-sharp pictures in very low light, without long exposures or the kinds of graininess associated with low-light photography. Read the rest

AI finds solutions its creators didn't anticipate

Austrian researcher Károly Zsolnai-Fehér from Two Minute Papers presents four examples of AI finding loopholes that programmers had not anticipated. Read the rest

Google's talking AI is indistinguishable from humans

Tacotron 2 is Google's new text-to-speech system, and as heard in the samples below, it sounds indistinguishable from humans.

From Quartz:

The system is Google’s second official generation of the technology, which consists of two deep neural networks. The first network translates the text into a spectrogram (pdf), a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio elements accordingly.

Tacotron 2 or human? For each of the sentences below, one recording was generated by Tacotron 2 and one is the recording of a human, but which is which?

“That girl did a video about Star Wars lipstick.”

“She earned a doctorate in sociology at Columbia University.”

“George Washington was the first President of the United States.”

“I'm too busy for romance.”
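
The two-stage pipeline Quartz describes is easy to sketch in code. Below is a minimal, hypothetical Python illustration of the data flow only: both stages are random-weight stand-ins rather than Google's Tacotron 2 or DeepMind's WaveNet, and the constants (N_MELS, FRAMES_PER_CHAR, HOP_SAMPLES) are made-up values.

    # A rough sketch (not Google's code) of the two-stage pipeline described above:
    # stage 1 predicts a mel spectrogram from text, stage 2 (a WaveNet-style vocoder)
    # turns that spectrogram into a waveform. Both stages are random-weight stand-ins.
    import numpy as np

    N_MELS = 80          # mel frequency bands per spectrogram frame (assumed)
    FRAMES_PER_CHAR = 6  # crude stand-in for the learned text-to-frame alignment
    HOP_SAMPLES = 256    # audio samples generated per spectrogram frame (assumed)

    def text_to_spectrogram(text: str) -> np.ndarray:
        """Stage 1 stand-in: map characters to a (frames, N_MELS) mel spectrogram."""
        chars = np.frombuffer(text.encode("utf-8"), dtype=np.uint8).astype(np.float32)
        frames = np.repeat(chars / 255.0, FRAMES_PER_CHAR)      # fake alignment
        projection = np.random.default_rng(0).standard_normal((1, N_MELS))
        return frames[:, None] @ projection                     # shape: (frames, N_MELS)

    def vocoder(mel: np.ndarray) -> np.ndarray:
        """Stage 2 stand-in: a real vocoder generates audio conditioned on each
        spectrogram frame; here we just emit noise scaled by frame energy."""
        energy = np.abs(mel).mean(axis=1, keepdims=True)
        noise = np.random.default_rng(1).standard_normal((mel.shape[0], HOP_SAMPLES))
        return (energy * noise).ravel()

    mel = text_to_spectrogram("I'm too busy for romance.")
    audio = vocoder(mel)
    print(mel.shape, audio.shape)   # (150, 80) (38400,) -- about 2.4 s at 16 kHz

In the real system, the first network is a sequence-to-sequence model with attention, and WaveNet generates the waveform one audio sample at a time, conditioned on the spectrogram frames.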

Soundwave image by T-flex/Shutterstock. Read the rest

Rodney Brooks on the present and future of robotics & AI

Rodney Brooks is the father of the Roomba, the founder of iRobot, and the creator of both the Baxter and Sawyer product lines from Rethink Robotics. He’s arguably the world’s most accomplished roboticist. And if he’s not – and I personally can’t think of who could challenge him for that crown – he’s definitely the top roboticist to be profiled in an Errol Morris documentary (1997’s Fast, Cheap & Out of Control).

When Rodney left Australia for the region that would later become known as Silicon Valley, there were quite literally 3 mobile robots of consequence on the entire planet. Years later, he founded a company that has since brought tens of millions of these critters into the world. His products have saved countless lives. They have also liberated thousands of acres of carpeting from dust, crumbs, dog hair, and other detritus.

Amazingly, Rodney’s tenure and credentials are every bit as significant in a second high-tech field: artificial intelligence. He founded the company that was the leading developer of AI coding tools throughout the '80s and early '90s. And somehow he squeezed his robotics and AI entrepreneurship in while building a storied academic career – largely at MIT, where he spent a decade running one of the two largest and most prominent AI centers in the world.

Rodney is my guest in this week's edition of the After On podcast. You can listen to our interview by searching “After On” in your favorite podcast app, or by clicking right here:

As you’ll hear, Rodney diverges from fashionable narratives on several tech-related topics. Read the rest

Artificial intelligence brought the freaky visuals to this music video

A loud experimental band from Monaco named Hardcore Anal Hydrogen enlisted the aid of artificial intelligence to generate the trippy, psychedelic visuals used in their music video for the track, "Jean-Pierre."

On their site, the band details the ins and outs of how they manipulated Google Deep Dream, Neural Style Transfer, and DeepFlow to get these results:

How the AI...works seems complicated, but it's not. There are two main steps: training, then exploiting. It's the same as teaching someone to recognize an object, let's say a chair. You show them many pictures of different chairs. Then, once they've learned it, when they see a chair they'll be able to answer "it's a chair," even if it's a new one they've never seen.
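
Their "training, then exploiting" description is the standard train/inference workflow of a supervised classifier. Here is a minimal sketch of that split, assuming scikit-learn and its built-in digits dataset rather than the band's Deep Dream, style-transfer, and DeepFlow setup; the chair example maps onto it directly.

    # A minimal sketch of the "training, then exploiting" split the band describes,
    # using scikit-learn's small digits dataset rather than their image pipeline.
    # Training shows the model labeled examples (the "pictures of different chairs");
    # exploiting asks it to label images it has never seen.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()                       # ~1,800 labeled 8x8 images of digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, y_train)                  # step 1: training

    print("unseen image classified as:", model.predict(X_test[:1])[0])       # step 2: exploiting
    print("accuracy on unseen images: %.2f" % model.score(X_test, y_test))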

Turn down the volume and open your eyes and minds.

(The Awesomer) Read the rest

Watch this nuanced analysis of sci-fi film Ex Machina

Film critic Allison de Fren examines some of the complex issues raised in the 2015 film Ex Machina. Her voiceover alone makes the video worth watching, but the insights about how women are depicted helped me get over some of my discomfort with the film's themes. Read the rest

Enjoy a relaxing AI lullaby written by our future robot overlords

If you teach an artificial intelligence a bunch of lullabies, will the resulting AI-generated lullaby make you dream of electric sheep? Read the rest

AI is taking photo enhancement to new extremes

Single image super-resolution (SISR) is an emerging technology that uses automated texture synthesis to enhance dithered and blurry photos to nearly pristine resolution. This example from EnhanceNet-PAT shows one such approach; a rough sketch of the general idea follows below. There's even a free website called Let's Enhance where you can up-res your own images. Read the rest
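
As a rough illustration of what a super-resolution network does to an image, here is a minimal, untrained PyTorch sketch of the general SISR idea: a few convolutions followed by a learned sub-pixel upsampling layer that outputs an image four times larger. It is not EnhanceNet-PAT's architecture, and it omits the texture-synthesis training that produces the sharp results; it only shows the low-res-in, high-res-out shape of the problem.

    # An untrained toy model (not EnhanceNet-PAT) showing the basic shape of SISR:
    # convolutions followed by a learned sub-pixel (pixel-shuffle) upsampling layer
    # that turns a low-resolution image into one 4x larger.
    import torch
    import torch.nn as nn

    class TinySuperRes(nn.Module):
        def __init__(self, scale: int = 4):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
                nn.PixelShuffle(scale),  # rearranges channels into a (scale x) larger image
            )

        def forward(self, x):
            return self.body(x)

    model = TinySuperRes(scale=4)
    low_res = torch.rand(1, 3, 64, 64)           # a fake 64x64 RGB input
    with torch.no_grad():
        high_res = model(low_res)
    print(tuple(low_res.shape), "->", tuple(high_res.shape))   # (1, 3, 64, 64) -> (1, 3, 256, 256)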

Hypnotic video of imaginary celebrities generated by a neural net

A generative adversarial network (GAN) combines two neural networks engaged in a zero-sum competition. The result is a form of unsupervised machine learning that can produce imaginary celebrities like the ones shown in this one-hour video. Read the rest
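
The zero-sum setup is simple to sketch. The toy PyTorch example below pits a tiny generator against a tiny discriminator over a 1-D Gaussian instead of celebrity faces; the model behind the video is vastly larger, but the adversarial training loop has the same shape.

    # A toy GAN (not the model behind the video): the generator G tries to turn
    # noise into samples that look like the "real" data, a 1-D Gaussian centered
    # at 3.0, while the discriminator D tries to tell real from generated.
    import torch
    import torch.nn as nn

    def real_data(n):
        return torch.randn(n, 1) * 0.5 + 3.0     # "real" samples ~ N(3.0, 0.5)

    def noise(n):
        return torch.randn(n, 8)

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # Discriminator step: label real samples 1 and generated samples 0.
        loss_d = bce(D(real_data(64)), torch.ones(64, 1)) + \
                 bce(D(G(noise(64)).detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator step: try to make the discriminator call fakes real.
        loss_g = bce(D(G(noise(64))), torch.ones(64, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    print("mean of generated samples:", G(noise(1000)).mean().item())  # drifts toward 3.0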

Study measuring IQ of various AIs puts Google's at 47.28

Google's AI scored more than twice as high as Apple's Siri in a comparative analysis designed to assess AI threat. Read the rest

What are the real risks we humans could face from a rogue AI superintelligence?

To hear a wide-ranging interview about the real-world risks we humans could face from a rogue superintelligence, hit play, below. My guest is author and documentary filmmaker James Barrat. Barrat’s 2014 book Our Final Invention was the gateway drug that ushered me into the narcotic realm of contemplating super AI risk. So it’s on first-hand authority that I urge you to jump in – the water’s great!

This is the seventh episode of my podcast series (co-hosted by Tom Merritt), which launched here on Boing Boing last month. The series goes deep into the science, tech, and sociological issues explored in my novel After On – but no familiarity with the novel is necessary to listen to it.

The danger of artificial consciousness has a noble pedigree in science fiction. In most minds, its wellspring is 2001: A Space Odyssey, which features HAL 9000 – an onboard computer that decides to kill off its passengers before they can disconnect it (spoiler: HAL’s rookie season ends – rather abruptly – with a 1-1 record).

James’s interest in this subject was piqued when he interviewed 2001’s author, Arthur C. Clarke, back in the pertinent year of 2001. Clarke’s concerns about superintelligence went beyond the confines of fiction. And he expressed them cogently enough to freak James out to this day.

Among James’s worries is that Hollywood has inoculated many of us against taking super AIs seriously by depicting them so preposterously. “Imagine if the Centers for Disease Control issued a serious warning about vampires,” he notes. Read the rest

Facebook AI has learned how to read your face and prompt a bot to respond appropriately

Facebook has taken a step closer to the border between human and AI interaction.

According to New Scientist, the social network’s AI lab directed a bot to watch hundreds of Skype conversations from YouTube to analyze subtle facial expressions, and then try to respond to them.

The bot is said to be able to “tilt its head” or “open its mouth” while viewing video of a human laughing.

A volunteer panel judged the bot and deemed it “qualitatively realistic,” presumably while their mouths hung open.

Read the rest

This AI trained on a million logos will now rate your logo

If an artificial intelligence reviewed your favorite logo, how would that logo fare? Now you can find out with Logo Rank, a nifty tool by the guy behind Brandmark. Read the rest

Artificial intelligence identifies plant species by looking at them

Machine learning algorithms have successfully identified plant species in massive herbaria just by looking at the dried specimens. According to researchers, similar AI approaches could also be used to identify the likes of fly larvae and plant fossils. From Nature:

There are roughly 3,000 herbaria in the world, hosting an estimated 350 million specimens — only a fraction of which has been digitized. But the swelling data sets, along with advances in computing techniques, enticed computer scientist Erick Mata-Montero of the Costa Rica Institute of Technology in Cartago and botanist Pierre Bonnet of the French Agricultural Research Centre for International Development in Montpellier, to see what they could make of the data.

Researchers trained... algorithms on more than 260,000 scans of herbarium sheets, encompassing more than 1,000 species. The computer program eventually identified species with nearly 80% accuracy: the correct answer was within the algorithms’ top 5 picks 90% of the time. That, says (Penn State paleobotanist Peter) Wilf, probably out-performs a human taxonomist by quite a bit.

Such results often worry botanists, Bonnet says, many of whom already feel that their field is undervalued. “People feel this kind of technology could be something that will decrease the value of botanical expertise,” he says. “But this approach is only possible because it is based on the human expertise. It will never remove the human expertise.” People would also still need to verify the results, he adds.
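
For readers unfamiliar with the metrics in the quote: "nearly 80% accuracy" is top-1 accuracy, and "within the algorithms' top 5 picks 90% of the time" is top-5 accuracy. Here is a small sketch of how those two numbers are computed from a classifier's per-species scores, using random stand-in data rather than the study's model or its herbarium scans.

    # A scan counts as a top-5 hit if the correct species appears anywhere among
    # the model's five highest-scoring guesses. Random stand-in scores here.
    import numpy as np

    rng = np.random.default_rng(0)
    n_scans, n_species = 500, 1000
    scores = rng.standard_normal((n_scans, n_species))        # one score per species, per scan
    true_species = rng.integers(0, n_species, size=n_scans)

    top5 = np.argsort(scores, axis=1)[:, -5:]                 # the 5 best-scoring species per scan
    top1_accuracy = np.mean(top5[:, -1] == true_species)
    top5_accuracy = np.mean((top5 == true_species[:, None]).any(axis=1))
    print(f"top-1: {top1_accuracy:.1%}   top-5: {top5_accuracy:.1%}")

With random scores these land near chance (roughly 0.1% and 0.5% across 1,000 species); the study's 80% and 90% reflect what the trained network learned from the 260,000 scans.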

"Going deeper in the automated identification of Herbarium specimens" (BMC Evolutionary Biology) Read the rest

Guy adds audio from the Blade Runner 2049 trailer to the new Google Assistant ad

Spencer Chen, VP of marketing and business development at Alibaba Group, added the audio from the Blade Runner 2049 trailer to the ad for the new Google Assistant. "I'm scared," he tweeted. "Literally no extra editing involved."


Read the rest

Scientists ponder the possibility of quantum consciousness

As AI improves, the mystery of consciousness interests more programmers and physicists. Read the rest

A neural network generated these can't-fail pickup lines

Neural nets are starting to wake up. These pickup lines, generated by a neural net maintained by research scientist Janelle Shane, are much more interesting than standard pickup lines.

Are you a 4loce? Because you’re so hot!

I want to get my heart with you.

You are so beautiful that you know what I mean.

I have a cenver? Because I just stowe must your worms.

Hey baby, I’m swirked to gave ever to say it for drive.

If I were to ask you out?

You must be a tringle? Cause you’re the only thing here.

I’m not on your wears, but I want to see your start.

You are so beautiful that you make me feel better to see you.

Hey baby, you’re to be a key? Because I can bear your toot?

I don’t know you.

I have to give you a book, because you’re the only thing in your eyes.

Are you a candle? Because you’re so hot of the looks with you.

I want to see you to my heart.

If I had a rose for every time I thought of you, I have a price tighting.

I have a really falling for you.

Your beauty have a fine to me.

Are you a camera? Because I want to see the most beautiful than you.

I had a come to got your heart.

You’re so beautiful that you say a bat on me and baby.

You look like a thing and I love you.

Hello.

Read the rest
