We need a new science of measurement to figure out when AI is working and why it fails

Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems."

The core issue here isn't that AI is worse than the existing human-led processes used to make predictions and assign rankings. Indeed, there's much hope that AI can provide more objective assessments than humans, reducing bias and leading to better outcomes. The key concern is that AI systems are being integrated into core social institutions even though their accuracy, and their social and economic effects, have not been rigorously studied or validated.

There needs to be a strong research field that measures and assesses the social and economic effects of current AI systems, in order to strengthen AI's positive impacts and mitigate its risks. By measuring the impacts of these technologies, we can strengthen the design and development of AI, assist public and private actors in ensuring their systems are reliable and accountable, and reduce the possibility of errors. By building an empirical understanding of how AI functions on the ground, we can establish evidence-led models for responsible and ethical deployment, and ensure the healthy growth of the AI field.

Artificial intelligence is hard to see [Kate Crawford and Meredith Whittaker/Medium]