Graphcore produced a series of striking images of computational graphs mapped to its "Intelligent Processing Unit."
The graph compiler builds an intermediate representation of the computational graph to be scheduled and deployed across one or many IPU devices. Because the compiler can display this graph, an application written at the level of a machine learning framework can be rendered as an image of the computational graph that actually runs on the IPU.
The image below shows the graph for the full forward and backward training loop of AlexNet, generated from a TensorFlow description.
Our Poplar graph compiler has converted a description of the network into a computational graph of 18.7 million vertices and 115.8 million edges. This graph represents AlexNet as a highly-parallel execution plan for the IPU. The vertices of the graph represent computation processes and the edges represent communication between processes. The layers in the graph are labelled with the corresponding layers from the high level description of the network. The clearly visible clustering is the result of intensive communication between processes in each layer of the network, with lighter communication between layers.
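The vertex-and-edge structure described above can be sketched in a few lines: a toy dataflow graph in which vertices hold compute tasks and edges record communication between them. All names here are illustrative; this is not Poplar's actual API, just a minimal model of the representation.

```python
# Toy computational graph: vertices = compute processes,
# edges = communication links between processes.

class Graph:
    def __init__(self):
        self.vertices = {}   # name -> compute function
        self.edges = []      # (producer, consumer) communication links

    def add_vertex(self, name, fn):
        self.vertices[name] = fn

    def add_edge(self, src, dst):
        self.edges.append((src, dst))

    def degree(self, name):
        """Number of communication links touching a vertex — dense
        within-layer connectivity is what shows up as visible clusters."""
        return sum(1 for s, d in self.edges if name in (s, d))

g = Graph()
g.add_vertex("conv1", lambda x: x)          # placeholder compute
g.add_vertex("relu1", lambda x: max(x, 0))  # placeholder compute
g.add_edge("conv1", "relu1")
print(len(g.vertices), len(g.edges))
```

At AlexNet scale this same structure holds 18.7 million vertices and 115.8 million edges, which is why the rendered graph reads as an execution plan rather than a layer diagram.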
The video's description says, "Have you ever been on the Internet when you came across a checkbox that says 'I'm not a robot'? In this video, I explain how those checkboxes (No CAPTCHA reCAPTCHAs) work, as well as why they exist in the first place."
I mention CAPTCHA farms briefly, but the idea behind them is straightforward. Suppose a company wants an automated program to buy 1,000 tickets to an event or create 1,000 email accounts. It writes a script that fills out the form one submission at a time. When the script hits a CAPTCHA, it sends a picture of it to a CAPTCHA farm, where a low-wage worker solves it and sends the answer back so the script can finish filling out the form.
My friend and Cool Tools business partner Kevin Kelly spoke at TEDSummit about the rapid rise of artificial intelligence. The talk is based on his excellent bestselling book, The Inevitable.
"The actual path of a raindrop as it goes down the valley is unpredictable, but the general direction is inevitable," says digital visionary Kevin Kelly — and technology is much the same, driven by patterns that are surprising but inevitable. Over the next 20 years, he says, our penchant for making things smarter and smarter will have a profound impact on nearly everything we do. Kelly explores three trends in AI we need to understand in order to embrace it and steer its development. "The most popular AI product 20 years from now that everyone uses has not been invented yet," Kelly says. "That means that you're not late."
In his 1854 book, Walden, Henry David Thoreau wrote, “Men have become the tools of their tools.” Thoreau’s assertion is as valid today as it was when he made it over one hundred and sixty years ago. Whenever we shape technology, it shapes us, both as individuals and as a society. We created cars, and cars turned us into motorists, auto mechanics, and commuters.
Over the centuries we’ve populated our world with machines that help us do things we can’t or don’t want to do ourselves. Our world has become so saturated with machines that they’ve faded into the background; we hardly notice them. But we are reaching a new threshold. Our machines are getting networked, enabling new forms of human-machine symbiosis. We’re entering an era in which fifty billion machines are in constant communication, automating and orchestrating the movement and interactions of individuals, organizations, and cities.
Institute for the Future (IFTF) is a non-profit think tank in Silicon Valley that helps organizations and the public think about the long-term future in order to make better decisions in the present. Mark Frauenfelder, a research director at IFTF, interviewed Rod Falcon, director of IFTF’s Technology Horizons Program, which combines a deep understanding of technology and societal forces to identify and evaluate discontinuities and innovations in the near future. Rod discussed Technology Horizons’ recent research into how machine automation is becoming an integrated, embedded, and ultimately invisible part of virtually every aspect of our lives.
The Allen Institute for Artificial Intelligence (AI2), funded by billionaire Paul Allen, is developing projects like an AI-based search engine for scientific papers and a system to extract "visual knowledge" from images and videos. According to Scientific American, another goal of AI2 is "to counter messages perpetuated by Hollywood and even other researchers that AI could menace the human race." SciAm's Larry Greenemeier interviewed AI2 CEO and computer scientist Oren Etzioni:
Why do so many well-respected scientists and engineers warn that AI is out to get us?
It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after a while—it’s a slowly developing topic. The one thing that I would say is that when they and Bill Gates—someone I respect enormously—talk about AI turning evil or potential cataclysmic consequences, they always insert a qualifier that says “eventually” or this “could” happen. And I agree with that. If we talk about a thousand-year horizon or the indefinite future, is it possible that AI could spell doom for the human race? Absolutely it’s possible, but I don’t think this long-term discussion should distract us from the real issues like AI and jobs and AI and weapons systems. And that qualifier about “eventually” or “conceivably” is what gets lost in translation...
How do you ensure that an AI program will behave legally and ethically?
If you’re a bank and you have a software program that’s processing loans, for example, you can’t hide behind it.
Some of the most powerful technology companies on the planet have formed a partnership on artificial intelligence.
My friend and Cool Tools partner Kevin Kelly was interviewed about his book, The Inevitable. In this video, he discusses what will happen when artificial intelligence is sold like electricity, as a utility.
NPR has a quiz that invites you to guess which of six poems were written by a computer program, and which were written by humans. A group of 10 judges weren't fooled, but I had trouble correctly guessing all of them. I appreciated the computer-generated poems as much as the human-written ones.
The dirty rusty wooden dresser drawer.
A couple million people wearing drawers,
Or looking through a lonely oven door,
Flowers covered under marble floors.

And lying sleeping on an open bed.
And I remember having started tripping,
Or any angel hanging overhead,
Without another cup of coffee dripping.

Surrounded by a pretty little sergeant,
Another morning at an early crawl.
And from the other side of my apartment,
An empty room behind the inner wall.

A thousand pictures on the kitchen floor,
Talked about a hundred years or more.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) trained a neural network to recognize materials (e.g., metal grate, plants, concrete sidewalk) being hit with a drumstick, and synthesize sounds to accompany the actions. It did well enough to fool humans into thinking the sounds were real. From the abstract:
Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions.
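The "example-based synthesis procedure" the abstract mentions can be sketched as a simple retrieval step: for each sound-feature vector the network predicts, find the training exemplar with the closest features and reuse its waveform snippet. The names and toy data below are illustrative assumptions, not the paper's actual code.

```python
# Hedged sketch of example-based synthesis via nearest-neighbor
# retrieval: match predicted sound features against a bank of
# (feature vector, waveform snippet) exemplars.

def nearest_exemplar(predicted, exemplars):
    """Return the waveform of the exemplar whose feature vector is
    closest (squared Euclidean distance) to the predicted features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(exemplars, key=lambda ex: dist(predicted, ex["features"]))
    return best["waveform"]

# Toy exemplar bank: two materials with hypothetical 2-D features.
bank = [
    {"features": [0.9, 0.1], "waveform": "metal_clang.wav"},
    {"features": [0.2, 0.8], "waveform": "leaf_rustle.wav"},
]

print(nearest_exemplar([0.85, 0.15], bank))  # metal_clang.wav
```

In the actual system the recurrent network supplies the predicted features frame by frame, and the retrieved snippets are stitched into a continuous waveform; this sketch only shows the matching step.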
This 'Trump Deep Nightmare' video is insane. Insanely accurate, that is. Don't watch while using psychedelic drugs, unless highly experienced.