Algorithms reflect the biases of their programmers and of the data sets they learn from, and machine learning can, through algorithmic oppression, replicate social prejudices and structures of inequality. Researchers studying algorithms and social life have emphasized these conclusions. But what is to be done as AI "governs access to information, opportunity, and freedom"? What is to be done when the foundations of meaning-making are shaped by programmers and their imaginations, and by the corporations, militaries, police forces, and other centers of power that employ those imaginations?
The documentary film Coded Bias engages these and other questions.
"Coded Bias illuminates our mass misconceptions about AI, emphasizes the urgent need for legislative protection, and follows the Algorithmic Justice League's journey to push for the first-ever legislation in the U.S. to place limits on facial recognition technology."
"When MIT researcher, poet and computer scientist Joy Buolamwini uncovers racial and gender bias in AI systems sold by big tech companies, she embarks on a journey alongside pioneering women sounding the alarm about the dangers of unchecked artificial intelligence that impacts us all. Through Joy's transformation from scientist to steadfast advocate and the stories of everyday people experiencing technical harms, Coded Bias sheds light on the threats A.I. poses to civil rights and democracy."
You can watch the trailer for Coded Bias here.
AJL's mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.
Through a combination of art, research, policy guidance, and media advocacy, the Algorithmic Justice League is leading a cultural movement toward equitable and accountable AI. This requires us to look at how AI systems are developed and to actively prevent the harmful use of AI systems. We aim to empower communities and galvanize decision makers to take action that mitigates the harms and biases of AI.
To achieve equitable AI, AJL focuses on three areas: agency and control, affirmative consent, and centering justice by attending to the ends for which AI is employed. Accountable AI means meaningful transparency, continuous participatory and consensual oversight, and attention to redressing harm.
The AJL website is a clearinghouse for personal stories, research reports, media, journalism, and legislation tracking.