Deflationary Intelligence: in 2017, everything is "AI"


Ian Bogost (previously) describes the "deflationary" use of "artificial intelligence" as a label for the most trivial computer-science innovations and software-enabled products, from Facebook's suicide-detection "AI" (a simple word-search program that alerts human reviewers) to chatbots billed as steps away from passing a Turing test, but which are little more than glorified phone trees, and which 40% of humans abandon after a single conversational volley.
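To see how thin that first example is, here is roughly what a word-search-that-alerts-humans system amounts to. This is a hypothetical sketch; the phrase list and function names are invented for illustration and are not Facebook's actual implementation:

```python
# Hypothetical sketch of a word-search "AI" of the kind Bogost describes:
# scan posts for watched phrases and route matches to a human reviewer.
# No model, no learning, just string search.

CONCERNING_PHRASES = ["i want to die", "kill myself", "end it all"]

def flag_post(post_text: str) -> bool:
    """Return True if the post contains any watched phrase."""
    text = post_text.lower()
    return any(phrase in text for phrase in CONCERNING_PHRASES)

def review_queue(posts):
    """Collect the posts that a human should look at."""
    return [post for post in posts if flag_post(post)]

if __name__ == "__main__":
    posts = ["having a great day!", "sometimes I want to end it all"]
    print(review_queue(posts))  # -> ['sometimes I want to end it all']
```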

What is "AI," then? Georgia Tech artificial intelligence researcher Charles Isbell says it's "Making computers act like they do in the movies."


Isbell suggests two features a system must have before it deserves the name AI. First, it must learn over time in response to changes in its environment. Fictional robots and cyborgs do this invisibly, by the magic of narrative abstraction. But even a simple machine-learning system like Netflix's dynamic optimizer, which attempts to improve the quality of compressed video, takes data gathered initially from human viewers and uses it to train an algorithm that makes future choices about video transmission.
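For a sense of what "learning over time" means at its simplest, here is a toy epsilon-greedy learner in the spirit of that example. It is a sketch under assumptions, not Netflix's dynamic optimizer: the class name, bitrate options, and quality scores are all invented for illustration.

```python
# Toy illustration of Isbell's first criterion: a system whose future
# choices change in response to feedback from its environment.
import random

class BitrateChooser:
    def __init__(self, bitrates, epsilon=0.1):
        self.bitrates = bitrates
        self.epsilon = epsilon                    # chance of exploring a random option
        self.totals = {b: 0.0 for b in bitrates}  # summed quality scores per bitrate
        self.counts = {b: 0 for b in bitrates}    # times each bitrate was tried

    def choose(self):
        """Usually pick the best-scoring bitrate so far; occasionally explore."""
        if random.random() < self.epsilon or not any(self.counts.values()):
            return random.choice(self.bitrates)
        return max(self.bitrates,
                   key=lambda b: self.totals[b] / max(self.counts[b], 1))

    def learn(self, bitrate, quality_score):
        """Fold an observed viewer-quality score back into future choices."""
        self.totals[bitrate] += quality_score
        self.counts[bitrate] += 1

if __name__ == "__main__":
    chooser = BitrateChooser([1500, 3000, 6000])
    for _ in range(100):
        b = chooser.choose()
        # Fake quality signal that mildly favors higher bitrates.
        chooser.learn(b, random.gauss(b / 1000, 1.0))
    print(chooser.counts)
```

Feed it quality scores gathered from viewers and its future choices shift: the program's behavior tomorrow depends on the data it saw today, which is exactly Isbell's first test.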

Isbell's second feature of true AI: what the system learns to do should be interesting enough that it takes humans some effort to learn. This distinction separates artificial intelligence from mere computational automation. A robot that replaces human workers on an automobile assembly line isn't an artificial intelligence so much as a machine programmed to automate repetitive work. For Isbell, "true" AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

'Artificial Intelligence' Has Become Meaningless
[Ian Bogost/Atlantic]