AI has a legibility problem

I first encountered the idea of technological "legibility" in the work of Natalie Jeremijenko (previously), who insists on the importance of humans being able to know why a computer is doing what it does.

In 2005, I turned this idea into a short story called "Human Readable" (it's in my collection With a Little Help), about the mischief that corrupt companies and their tame regulators could get up to by designing self-driving cars that give traffic priority to the super-wealthy, exploiting our inability to distinguish corruption from programmer error.

Writing in the MIT Technology Review, Will Knight does an excellent job of explaining how, 12 years later, machine learning techniques are being infused into many domains of human life, from criminal sentencing to credit rating, from search results to stock picks, and how the nascent science of explaining why a deep-learning system has made a given call is struggling to keep up.

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. "You really need to have a loop where the machine and the human collaborate," Barzilay says.
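
The snippet-highlighting step described above amounts to ranking pieces of the input by how strongly they support the model's prediction, so a human can check the evidence. Here is a minimal Python sketch of that general idea; it is illustrative only, not Barzilay's actual system, and the token weights, function names, and sample report are all invented for the example (a real system would learn its scoring from labeled data).

import re

# Hypothetical per-token evidence weights; a real system would learn
# these from labeled pathology reports rather than hard-code them.
TOKEN_WEIGHTS = {
    "carcinoma": 2.0,
    "invasive": 1.5,
    "atypical": 1.2,
    "benign": -1.5,
}

def snippet_score(snippet):
    """Sum the evidence weights of the tokens in one snippet."""
    tokens = re.findall(r"[a-z]+", snippet.lower())
    return sum(TOKEN_WEIGHTS.get(t, 0.0) for t in tokens)

def highlight(report, top_k=2):
    """Return the top_k sentences that most support the model's call."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", report) if s.strip()]
    return sorted(sentences, key=snippet_score, reverse=True)[:top_k]

report = ("Specimen received in formalin. "
          "Sections show invasive ductal carcinoma with atypical mitoses. "
          "Surrounding tissue appears benign.")
for snippet in highlight(report):
    print(">>", snippet)

Surfacing the highest-scoring snippets is what makes the "loop where the machine and the human collaborate" possible: a doctor can verify the highlighted evidence against the record instead of taking the prediction on faith.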

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

The Dark Secret at the Heart of AI

[Will Knight/MIT Tech Review]

(via Dan Hon)