MIT's AI risk database exposes 700+ ways AI could ruin your life

The AI Risk Repository is a new, publicly available database compiled by MIT researchers that catalogs more than 700 (and counting) risks posed by AI.

From the abstract:

The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference. This comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered based on two overarching taxonomies and easily accessed, modified, and updated via our website and online spreadsheets.

[…]

The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.

The website also includes comprehensive details about how the researchers developed their taxonomy and defined what, exactly, qualifies as an AI risk. But if you don't want to dig into the academicese, MIT Technology Review has summarized it:

The team combed through peer-reviewed journal articles and preprint databases that detail AI risks. The most common risks centered around AI system safety and robustness (76%), unfair bias and discrimination (63%), and compromised privacy (61%). Less common risks tended to be more esoteric, such as the risk of creating AI with the ability to feel pain or to experience something akin to "death." 

The database also shows that the majority of risks from AI are identified only after a model becomes accessible to the public. Just 10% of the risks studied were spotted before deployment. 

Of course, there's a question of whether any Silicon Valley startup will actually consult this list before developing some hokey new generative AI feature to dazzle its investors with before laying off a bunch of workers for absolutely no good reason.


A new public database lists all the ways AI could go wrong [Scott J. Mulligan / MIT Technology Review]

AI Risk Repository [Massachusetts Institute of Technology]