The Gates Foundation spent $775m on a Big Data education project that was worse than useless

Kudos to the Gates Foundation, seriously: after spending $775m on the Intensive Partnerships for Effective Teaching, a Big Data initiative to improve education for poor and disadvantaged students, they hired outside auditors to evaluate the program's effectiveness, and published that report, even though it shows that the approach did no good on balance and arguably caused real harms to teachers and students. Read the rest

ICE hacked its algorithmic risk-assessment tool so it recommended detention for everyone

One of the more fascinating and horrible details in Reuters' thoroughly fascinating and horrible long-form report on Trump's cruel border policies is this nugget: ICE hacked the risk-assessment tool it used to decide whom to imprison so that it recommended that everyone should be detained. Read the rest
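
Reuters describes the outcome but not the code; as a purely hypothetical sketch (function and field names invented here, not ICE's system), this is what gutting the release branch of a decision rule looks like:

```python
# Hypothetical sketch (invented function and fields -- not ICE's code) of the
# effect Reuters describes: with the release branch disabled, every risk
# score produces the same recommendation.

def recommend(risk_score: float, release_enabled: bool = True) -> str:
    """Map a 0-1 risk score to a detention recommendation."""
    if release_enabled and risk_score < 0.5:
        return "release"   # the low-risk branch the hack switches off
    return "detain"

print(recommend(0.2))                         # before: "release"
print(recommend(0.2, release_enabled=False))  # after:  "detain" -- for everyone
```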

Thanks to 2016's trade secret law and algorithmic justice, America's courts have become AI-Kafka nightmares

In 2014, the Alice decision made it much harder to patent software in the USA; in 2016, Congress passed the Defend Trade Secrets Act, creating the first federal trade secrets statute: the result of these two developments is that software companies aggressively switched from patents to trade secrets as a means of controlling competition and limiting inspection and criticism of their products. Read the rest

Welsh police deployed facial recognition tech with a 92% false positive rate, but they're sure it's fine

The South Wales Police deployed facial recognition technology at the June 2017 Champions League soccer final in Cardiff, and 92% of the people the system flagged as matches for suspects were false positives. Read the rest
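
The 92% figure is less mysterious than it sounds: when genuine suspects are rare in a big crowd, even a fairly accurate matcher produces mostly false alarms. A quick base-rate calculation with assumed numbers (these are illustrative, not the South Wales figures):

```python
# Base-rate arithmetic with assumed, illustrative numbers (not the South
# Wales figures): rare suspects plus many scans means most alerts are wrong.

crowd = 170_000      # faces scanned at the event (assumed)
suspects = 500       # genuine suspects present (assumed)
tpr = 0.90           # matcher's true-positive rate (assumed)
fpr = 0.013          # false alarms per innocent face scanned (assumed)

true_hits = suspects * tpr              # suspects correctly flagged
false_hits = (crowd - suspects) * fpr   # innocents wrongly flagged
flagged = true_hits + false_hits

print(f"share of alerts that are false positives: {false_hits / flagged:.0%}")
# -> about 83%, despite the matcher catching 90% of real suspects
```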

Should I use an algorithm here? EFF's 5-point checklist

The Electronic Frontier Foundation's Jamie Williams and Lena Gunn have drawn up an annotated five-point list of questions to ask yourself before using a machine-learning algorithm to make predictions and guide outcomes. Read the rest

Palantir has figured out how to make money by using algorithms to ascribe guilt to people; now they're looking for new customers

In 2009, JP Morgan Chase's "special ops" guy was an ex-Secret Service agent called Peter Cavicchia III, who retained Palantir to spy on everyone in the company to find "insider threats" -- even getting the bank to invest in Palantir. Read the rest

UK police train machine-learning model using Experian data that stereotypes people based on garden size, first names, and postal codes

The police in Durham, England, bought a license to the "Mosaic" dataset from the credit bureau Experian, which includes data on 50,000,000 Britons, in order to train a machine-learning system called HART ("Harm Assessment Risk Tool") that tries to predict whether someone will reoffend. Read the rest
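
HART has been reported to be a random-forest model; the sketch below uses synthetic data and invented feature names (not the actual Mosaic schema) to show the mechanism critics worry about -- a postcode-style proxy smuggling a protected attribute into the model even though that attribute is never a feature:

```python
# Synthetic sketch (invented features, fake data -- not the Mosaic schema or
# HART itself) of how a proxy feature leaks a protected attribute into a
# reoffending model, even though that attribute is never a model input.

import numpy as np
from sklearn.ensemble import RandomForestClassifier  # HART is reported to use random forests

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, n)        # protected attribute; never a feature
behavior = rng.random(n)             # latent conduct, identical across groups

# Training labels come from biased enforcement: group 1 is rearrested for
# conduct that group 0 gets away with, so the labels themselves are skewed.
rearrest = ((behavior > 0.7) | ((group == 1) & (behavior > 0.5))).astype(int)

postcode = (0.7 * group + rng.random(n) > 0.85).astype(int)  # proxy for group
priors = (behavior * 3 + rng.random(n)).astype(int)          # noisy conduct signal

X = np.column_stack([postcode, priors])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, rearrest)

scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.2f}")
# The risk gap persists with 'group' absent from X: the postcode proxy carries it.
```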

Automating Inequality: using algorithms to create a modern "digital poor-house"

Weeks before the publication of Virginia Eubanks's new book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, knowledgeable friends were already urging me to read it -- a no-brainer, since I've followed Eubanks's work for years.

Anyone who claims that machine learning will save money in high-stakes government decision-making is lying

danah boyd (previously) writes enthusiastically about Virginia Eubanks's forthcoming book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, which she calls "the best ethnography I’ve read in years," "on par with Barbara Ehrenreich’s 'Nickel and Dimed' or Matthew Desmond’s 'Evicted.'" Read the rest

How machine learning engineers can detect and debug algorithmic bias

Ben Lorica, O'Reilly's chief data scientist, has posted slides and notes from his talk at last December's Strata Data Conference in Singapore, "We need to build machine learning tools to augment machine learning engineers." Read the rest
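
As a minimal taste of the kind of check such tooling automates (a generic fairness metric, not Lorica's software), here is a demographic-parity gap: the spread in positive-prediction rates across groups:

```python
# A generic bias check an ML engineer might automate (not Lorica's tooling):
# the demographic-parity gap, i.e. the spread in positive-prediction rates
# across groups sharing a protected attribute.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (max gap, per-group positive rates) for 0/1 predictions."""
    pos, tot = defaultdict(int), defaultdict(int)
    for y, g in zip(predictions, groups):
        tot[g] += 1
        pos[g] += y
    rates = {g: pos[g] / tot[g] for g in tot}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
print(rates)              # {'a': 0.666..., 'b': 0.333...}
print(f"gap = {gap:.2f}") # 0.33 -- flag for review above a chosen threshold
```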

Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier

In 2015, a black software developer named Jacky Alciné revealed that the image classifier used by Google Photos was labeling black people as "gorillas." Read the rest
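
Purging a label is a blunt instrument; a hedged sketch (hypothetical function, with the labels reportedly blocked) of what "solving" the problem this way amounts to:

```python
# Google's reported fix was to stop the classifier from ever returning the
# offending labels, rather than to retrain it. A hypothetical sketch of that
# blunt approach (function name and pipeline invented here):

BLOCKED = {"gorilla", "chimp", "chimpanzee", "monkey"}   # labels reportedly purged

def safe_labels(raw_labels):
    """Drop blocked labels from a classifier's output instead of fixing the model."""
    return [label for label in raw_labels if label.lower() not in BLOCKED]

print(safe_labels(["Gorilla", "primate", "outdoors"]))  # ['primate', 'outdoors']
```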

Statistical proof that voter ID laws are racially discriminatory

In ADGN: An Algorithm for Record Linkage Using Address, Date of Birth, Gender, and Name, newly published in Statistics and Public Policy, a pair of researchers from Harvard and Tufts build a statistical model to analyze the impact of the voter ID laws passed in Republican-controlled states as part of a wider voter suppression project that was explicitly aimed at suppressing the votes of racialized people, who have historically been likely to vote Democrat. Read the rest
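
The linkage idea itself is simple; a hedged sketch (not the authors' code, and skipping their statistical correction for agreement by chance) of joining two datasets on the four ADGN fields:

```python
# Hedged sketch of the record-linkage idea behind ADGN (not the authors'
# code): treat normalized address, date of birth, gender, and name as a key
# and join two datasets on it. The paper's model also corrects for matches
# that occur by chance; this sketch does not.

from typing import NamedTuple

class Record(NamedTuple):
    name: str
    dob: str       # ISO date, e.g. "1980-02-14"
    gender: str
    address: str

def key(r: Record) -> tuple:
    """Normalize the four ADGN fields into a comparable key."""
    return (r.name.strip().lower(), r.dob, r.gender.strip().lower(),
            " ".join(r.address.lower().split()))

def link(voter_rolls: list[Record], id_holders: list[Record]) -> list[Record]:
    """Return voter records that match an ID record on all four fields."""
    ids = {key(r) for r in id_holders}
    return [r for r in voter_rolls if key(r) in ids]

voters = [Record("Ada Smith", "1980-02-14", "f", "12 Elm St")]
holders = [Record("ada smith", "1980-02-14", "F", "12  Elm st")]
print(link(voters, holders))   # the record links despite case/spacing noise
```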

The DHS has illegally stuffed America's airports full of $1B worth of shitty, malfing facial-recognition tech

More than a dozen major US airports are now covered in facial-recognition cameras, installed by the DHS -- without the legally mandated federal review process -- to scan people departing on international flights. Read the rest

"Friendly" apps are good at maximizing engagement, but their context-blindness is a cesspit of algorithmic cruelty

Designers use metrics and A/B tests to maximize engagement with their products, seeking out the phrases that trigger emotional responses in users -- like a smart scale that congratulates you on losing weight -- but these systems are context-blind, so they are unable to distinguish between people who might be traumatized by their messages (say, a woman who's just miscarried late in her pregnancy, being congratulated on her "weight loss"). Read the rest
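
A toy illustration (invented function, not any shipping product's code) of that context-blindness -- the rule sees only the weight delta, never the reason for it:

```python
# Toy illustration (invented function, not any real product): the
# congratulation rule sees only the weight delta, never the reason for it.

def scale_message(prev_kg: float, curr_kg: float) -> str:
    if curr_kg < prev_kg:
        return f"Congratulations! You're down {prev_kg - curr_kg:.1f} kg!"
    return "Keep at it!"

# Fires identically for a dieter hitting a goal and for someone who has just
# suffered a late-term miscarriage -- the metric carries no context.
print(scale_message(70.0, 68.0))
```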

I Can't Breathe: Matt Taibbi's scorching book on the murder of Eric Garner and the system that let the killers get away with it

Matt Taibbi is one of the best political writers working in the USA today, someone who can use the small, novelistic details of individuals' lives to illuminate the vast, systemic problems that poison our lives and shame our honor; his 2014 book The Divide conducts a wide-ranging inquiry into the impunity of corporate criminals and the kafkaesque injustices visited on the poor people they victimize; in I Can't Breathe: A Killing on Bay Street, Taibbi narrows his focus to the police murder of Eric Garner, a Staten Island fixture and father, and the system that put murderers in uniform in his path.

What is bias in machine learning, when does it matter, and what can we do about it?

"The Trouble with Bias," Kate Crawford's (previously) keynote at the 2017 Neural Information Processing Systems is a brilliant tour through different ways of thinking about what bias is, and when we should worry about it, specifically in the context of machine learning systems and algorithmic decision making -- the best part is at the end, where she describes what we should do about this stuff, and where to get started. (via 4 Short Links) Read the rest

Coalition of 100+ tech groups and leaders warn the DHS that "extreme vetting" software will be a worse-than-useless, discriminatory nightmare

In a pair of open letters to The Honorable Elaine C. Duke, Acting Secretary of Homeland Security, a coalition of more than 100 tech liberties groups and leading technology experts urged the DHS to abandon its plan to develop a black-box algorithmic system for predicting whether foreigners coming to the USA to visit or live are likely to be positive contributors or risks to the nation. Read the rest
