AI's blind spot: garbage in, garbage out

Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation -- and on what to do about it.

Now, writing in Nature, the two argue that AI research is plagued by a "blind spot": "there are no agreed methods to assess the sustained effects of such applications on human populations." In other words, to quote Pedro Domingos from The Master Algorithm: "People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world."

The authors note that the AI industry is already taking steps to avoid the worst pitfalls of trusting too much in machine learning, like having humans step in and make exceptions when the AI gets it wrong, and embedding "values in design" to make sure that we're using AI in a way that reflects our wider goals. But they advocate for a more fundamental approach to containing AI's failures, a "social-systems analysis": "investigating how differences in communities’ access to information, wealth and basic services shape the data that AI systems train on."

In other words: garbage in, garbage out. Train an AI on data from biased activities -- like police stop-and-frisks -- and you get biased advice out of it. This is the fundamental pitfall of all statistical analysis: sampling bias.

A social-systems approach would consider the social and political history of the data on which the heat maps are based. This might require consulting members of the community and weighing police data against this feedback, both positive and negative, about the neighbourhood policing. It could also mean factoring in findings by oversight committees and legal institutions. A social-systems analysis would also ask whether the risks and rewards of the system are being applied evenly — so in this case, whether the police are using similar techniques to identify which officers are likely to engage in misconduct, say, or violence.

As another example, a 2015 study showed that a machine-learning technique used to predict which hospital patients would develop pneumonia complications worked well in most situations. But it made one serious error: it instructed doctors to send patients with asthma home even though such people are in a high-risk category. Because the hospital automatically sent patients with asthma to intensive care, these people were rarely on the ‘required further care’ records on which the system was trained. A social-systems analysis would look at the underlying hospital guidelines, and other factors such as insurance policies, that shape patient records.
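The pneumonia example is a textbook case of selection bias, and it's easy to reproduce in miniature. The sketch below uses fabricated counts (not the study's actual data) to show how a naive frequency model, trained on records shaped by the hospital's "asthma goes straight to intensive care" policy, concludes that asthma patients are the safe ones:

```python
# Toy illustration of the selection bias described above. The counts are
# invented to mimic the hospital's routing policy: asthma patients were
# sent straight to intensive care, so they almost never appear on the
# 'required further care' records the model was trained on.

# Each record: (has_asthma, needed_further_care)
training_records = (
    [(False, False)] * 800   # no asthma, recovered at home
    + [(False, True)] * 150  # no asthma, developed complications
    + [(True, False)] * 49   # asthma, but ICU care headed off complications
    + [(True, True)] * 1     # asthma, complications recorded anyway
)

def complication_rate(records, has_asthma):
    """Fraction of matching patients whose records show complications."""
    outcomes = [care for asthma, care in records if asthma == has_asthma]
    return sum(outcomes) / len(outcomes)

print(f"asthma:    {complication_rate(training_records, True):.2f}")   # 0.02
print(f"no asthma: {complication_rate(training_records, False):.2f}")  # 0.16
```

The model isn't wrong about the records; the records are wrong about the world. The aggressive intervention that made asthma patients safe was invisible to the system, so it learned the opposite of the truth.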

There is a blind spot in AI research [Kate Crawford and Ryan Calo/Nature]

(Images: Scrapyard, CC-0; HAL 9000, Cryteria, CC-BY; Processor, 502designs, CC-0)

Notable Replies

  1. doop says:

    garbage in, garbage out

    To be fair to artificial intelligence, this is also true of regular intelligence.

  2. I want that framed to hang in my office.

  3. Neal Stephenson discusses something like this -- "Artificial Inanity" -- in his novel Anathem:

    “Early in the Reticulum-thousands of years ago-it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

    “Crap, you once called it,” I reminded him.

    “Yes-a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”

    “What is good crap?” Arsibalt asked in a politely incredulous tone.

    “Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors-swapping one name for another, say. But it didn’t really take off until the military got interested.”

    “As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium A.R.”

    “Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”

    Source:

    http://m.litread.me/read/151294/123011-124604?page=341

  4. 2 + 2 = 3 for small values of 2.

    I would hope anyone smart enough to get into an engineering program would understand tolerance and sig figs. I would hope.

Continue the discussion bbs.boingboing.net
