Distinguished scientist on the mistakes pundits make when they predict the future of AI

Rodney Brooks — eminent computer scientist and roboticist who has served as director of MIT's Computer Science and Artificial Intelligence Laboratory and CTO of iRobot — has written a scorching, provocative list of the seven most common errors made (or cards palmed) by pundits and other fortune-tellers when they predict the future of AI.

His first insight is that AI is subject to Amara's Law ("We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run"), the same dynamic the Gartner hype cycle describes. That means a lot of what AI is supposed to accomplish in the near term (like taking over half of all jobs in 10 to 20 years) is wildly overblown, while the long-term consequences will likely be so profound that today's effects on labor markets will look like small potatoes.

Next is the unexplained leap from today's specialized, "weak" AIs that do things like recognize faces, to "strong" general AI that can handle the kind of cognitive work that humans are very good at and machines still totally suck at. It's not impossible that we'll make that leap, but anyone predicting it who can't explain where it will come from is just making stuff up.

Brooks draws a fascinating distinction between "performance" and "competence." The former is doing a thing well; the latter is being able to apply the expertise on display in that performance to related tasks. In humans, performance and competence are closely linked — someone who's good at identifying a photo of people playing frisbee in the park knows a lot of things about weather and people and games. A computer does not — but when it performs well at a task like image identification, we impute knowledge of that broader context to it.
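To make the gap concrete, here's a minimal Python sketch (my own illustration, not Brooks's): `classify_image` is a hypothetical stub standing in for any real image classifier, and its entire "understanding" of the scene is a distribution over labels.

```python
# A sketch of the performance/competence gap. `classify_image` is a
# hypothetical stub standing in for any real image classifier: its entire
# output is a probability distribution over labels, and nothing more.

def classify_image(path):
    # Stand-in for a trained model; in reality this would run inference.
    return {"frisbee": 0.93, "dog": 0.04, "park bench": 0.02}

labels = classify_image("people_playing_frisbee.jpg")
print(max(labels, key=labels.get))  # "frisbee" -- impressive performance

# Competence a person would bring to the same photo, none of which is
# represented anywhere in the model's output:
#   Is a frisbee edible? Can a three-month-old play frisbee?
#   Is it likely to be raining in the picture?
```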

The word "learning" (as in "machine learning") is the source of a lot of mischief. It's what AI pioneer Marvin Minsky called a "suitcase" word with a lot of meaning inside it that can be selectively unpacked. Machine learning is brittle compared to the kind of learning that humans are used to. Someone good at playing chess will likely also excel at playing under variant chess rules, while a computer that's really good at playing chess is apt to founder if the rules are changed. We tend to apply our intuition about learning's robustness to machine learning and therefore overestimate how many things a computer might understand on the basis of its excellence in one narrow task.

Thinking about exponential growth is also a pitfall for AI predictions, and not in the usual way. The usual problem of exponentialism is that we struggle to get our heads around just how quickly exponential growth ramps up (resulting in bankruptcy for untold numbers of emperors). But it's impossible to state, a priori, whether you are standing on the upslope of an exponential curve that's never going to stop rising, or merely on the rise of an S-curve that's about to level off — for example, mobile phone storage grew exponentially until there was enough storage on a phone to hold the median user's music, photos and assorted media, then leveled off to a rather gentle rise. Maybe the investment boom driving so much AI work will harvest all the gains that can be made without decades of massive effort, and then AI will sink into another 40 or 50 years of relative stagnation, as has been its usual pattern.
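To see why the two cases are indistinguishable from the inside, here's a minimal Python sketch (my own numbers, chosen for illustration): an exponential and a logistic S-curve with the same growth rate track each other closely on the early upslope and only diverge later.

```python
# A minimal illustration: on the early upslope, a true exponential and a
# logistic S-curve with the same growth rate are nearly indistinguishable.
import numpy as np

t = np.arange(0, 16)
growth_rate = 0.5
ceiling = 100.0  # the saturation point of the S-curve (arbitrary)

exponential = np.exp(growth_rate * t)                               # never levels off
s_curve = ceiling / (1 + (ceiling - 1) * np.exp(-growth_rate * t))  # levels off at `ceiling`

for step in t:
    print(f"t={step:2d}  exponential={exponential[step]:9.1f}  s_curve={s_curve[step]:6.1f}")

# For the first handful of steps the two columns track each other closely;
# only later does the S-curve flatten toward its ceiling while the
# exponential keeps climbing. From inside the early data, you can't tell
# which curve you're on.
```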

Brooks isn't fond of "Hollywood scenarios," which I might call insufficiently weird thinking: the idea that we'll get one big-bang AI takeover that leaves us flatfooted, as opposed to a series of disruptions that change many parts of our lives in sequence, giving us the opportunity to hone our coping skills. (The counterexample is climate change, where most of the action happens out of sight, and thus out of mind, until it breaks into public consciousness because it is now everywhere and causing catastrophic problems.)

Finally, Brooks reminds us that hardware is different from software. The roads are full of old cars, kitchens are full of old fridges, and the web is full of old browsers, the only thing that will run on old computers. The software upgrade cycle has given internet technologists a foreshortened view of the pace of change, so we assume that within a few years of the first functional self-driving cars, there will be nothing but self-driving cars on the road.

You can see this kind of thinking at work in policy discussions. Last week Radiolab re-ran and updated its episode on the trolley problem, and one of the hosts said that of course we'd have to make sure that all the self-driving cars follow the same software rules; otherwise, how will the cars know how to anticipate the actions of the cars around them? The obvious answer is that since self-driving cars will have to share the road with conventional, human-piloted cars for the entire service life of the last non-self-driving car sold before the switchover (call it 25 years), they'd better be able to cope with other vehicles whose control systems aren't deterministically programmed to follow a known set of rules!
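A back-of-the-envelope sketch in Python (my assumptions, not Brooks's figures: a constant-size fleet, a 25-year vehicle life, and steady replacement sales) shows how slowly the human-driven share fades even after every new car sold is self-driving.

```python
# Back-of-the-envelope fleet turnover, assuming (for illustration only) a
# constant fleet, a 25-year vehicle life, and sales that replace the oldest
# cars each year, all of them self-driving from year 1.
CAR_LIFETIME_YEARS = 25
FLEET_SIZE = 250_000_000                          # rough U.S. order of magnitude
ANNUAL_SALES = FLEET_SIZE // CAR_LIFETIME_YEARS   # steady-state replacement rate

human_driven = FLEET_SIZE
for year in range(1, CAR_LIFETIME_YEARS + 1):
    human_driven -= ANNUAL_SALES                  # the oldest human-driven cars retire
    print(f"year {year:2d}: {human_driven / FLEET_SIZE:4.0%} of the fleet still human-driven")

# Even in this idealized model, human-driven cars stay on the road for a
# full vehicle lifetime after the last one is sold.
```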

Brooks's list is indispensable reading for AI skeptics and believers alike — a toolkit for sorting hype from hope from hucksterism.


I recently saw a story in MarketWatch that said robots will take half of today's jobs in 10 to 20 years. It even had a graphic to prove the numbers.

The claims are ludicrous. (I try to maintain professional language, but sometimes …) For instance, the story appears to say that we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.

Mistaken predictions lead to fears of things that are not going to happen, whether it's the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes. But why are people making them? I see seven common reasons.


The Seven Deadly Sins of AI Predictions

[Rodney Brooks/MIT Technology Review]


(via Dan Hon)


(Image: Cryteria, CC-BY)