Customizable ethics checklists for Big Data researchers

Deon is a project to create automated "ethics checklists" for data science projects. By default, running the tool generates a comprehensive checklist covering data collection and storage, modeling, and deployment. The checklist items aren't specific actions; they're "meant to provoke discussion among good-faith actors who take their ethical responsibilities seriously. Because of this, most of the items are framed as prompts to discuss or consider. Teams will want to document these discussions and decisions for posterity."
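
For the Python-inclined, here's a minimal sketch of what using it looks like, assuming the deon package is installed and that its --output flag behaves as described in the project's README:

```python
# Minimal sketch of driving the deon CLI from Python. Assumes the
# package is installed (pip install deon) and that the --output flag
# works as documented in the project's README.
import subprocess

# Write the default data-ethics checklist to ETHICS.md in the repo root.
subprocess.run(["deon", "--output", "ETHICS.md"], check=True)
```

The "customizable" part reportedly comes from being able to supply your own checklist items in a YAML file in place of the default list.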

STET, a gorgeous, intricate, tiny story of sociopathic autonomous vehicles

Sarah Gailey's micro-short-story STET is a beautiful piece of innovative storytelling that blends the three ingredients of a perfect piece of science fiction: sharply observed technological speculation that reflects on our present moment; a narrative arc for characters we sympathize with; and a sting in the tail that will stay with you long after the story's been read.

Amazon trained a sexism-fighting, resume-screening AI with sexist hiring data, so the bot became sexist

Some parts of machine learning are incredibly esoteric and hard to grasp, surprising even seasoned computer science pros; other parts of it are just the same problems that programmers have contended with since the earliest days of computation. The problem Amazon had with its machine-learning-based system for screening job applicants was the latter.
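
The mechanism is the oldest one in the book: garbage in, garbage out. Here's a toy, synthetic sketch (nothing to do with Amazon's actual system) of how a model trained on historically biased hiring decisions learns the bias:

```python
# Toy, synthetic illustration (not Amazon's system): a screening model
# trained on historically biased hire/no-hire labels learns to penalize
# a feature that acts as a proxy for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                   # genuine qualification signal
proxy = rng.integers(0, 2, size=n)           # e.g., "women's chess club" on a resume
# Historical labels: past recruiters systematically marked down the proxy group.
hired = (skill - 1.2 * proxy + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(f"weight on proxy feature: {model.coef_[0][1]:.2f}")  # strongly negative
```

The model never sees a "gender" column; it simply reproduces the prejudice baked into its training labels.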

Good news: California ends cash bail! Bad news: It's been replaced with a black-box algorithm.

Cash bail has turned American jails into debtors' prisons, where the wealthy can go about their business while awaiting trial, and the poor can languish for months or even years in jail because they can't make bail (in practice, they generally plead guilty, regardless of their innocence).

The ACLU showed that Amazon's facial recognition system thinks members of Congress are felons, so now Congress is taking action

After learning that Amazon was pushing Rekognition, its facial recognition tool, for use in policing (a global phenomenon that is gaining momentum despite the material unsuitability of these tools in policing contexts), the ACLU of Northern California had a brainwave: they asked Rekognition to evaluate the faces of the members of the 115th Congress of the United States.
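
In API terms, the experiment boils down to face-matching calls like the one below, using boto3's Rekognition client. This is a simplified sketch: the ACLU reportedly matched lawmakers' photos against a collection of 25,000 publicly available arrest photos, relying on Rekognition's default 80% similarity threshold, and the filenames here are hypothetical.

```python
# Sketch of the kind of call involved. Simplified: the real test matched
# member-of-Congress photos against an indexed collection of mugshots,
# at Rekognition's default 80% similarity threshold.
import boto3

client = boto3.client("rekognition")

with open("congress_member.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
    resp = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,  # the default the ACLU test relied on
    )

for match in resp["FaceMatches"]:
    print(f"match at {match['Similarity']:.1f}% similarity")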

The Gates Foundation spent $775m on a Big Data education project that was worse than useless

Kudos to the Gates Foundation, seriously: after spending $775m on the Intensive Partnerships for Effective Teaching, a Big Data initiative to improve education for poor and disadvantaged students, they hired outside auditors to evaluate the program's effectiveness, and published that report, even though it shows that the approach did no good on balance and arguably caused real harms to teachers and students.

ICE hacked its algorithmic risk-assessment tool so it recommended detention for everyone

One of the more fascinating and horrible details in Reuters' thoroughly fascinating and horrible long-form report on Trump's cruel border policies is this nugget: ICE hacked the risk-assessment tool it used to decide whom to imprison so that it recommended that everyone should be detained.
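
As described, the "hack" was conceptually trivial: remove the tool's "release" recommendation so that detention is the only reachable outcome. A hypothetical sketch (not ICE's actual code, and the threshold is invented):

```python
# Hypothetical sketch of the rigging described in the reporting: a
# risk-assessment tool whose "release" branch has been removed, so every
# score maps to detention regardless of the input.
def recommend(risk_score: float) -> str:
    # Before the change, low-risk detainees could be recommended for release:
    # if risk_score < 0.3:   # hypothetical threshold
    #     return "release"
    return "detain"  # after the change, the only possible output

for score in (0.05, 0.5, 0.95):
    print(score, "->", recommend(score))  # always "detain"
```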

Thanks to 2016's trade secret law and algorithmic justice, America's courts have become AI-Kafka nightmares

In 2014, the Alice decision made it much harder to patent software in the USA; in 2016, Congress passed the Defend Trade Secrets Act, creating the first federal trade-secrets statute. The result of these two developments is that software companies aggressively switched from patents to trade secrets as a means of controlling competition and limiting inspection and criticism of their products.

Welsh police deployed facial recognition tech with a 92% false positive rate, but they're sure it's fine

The South Wales Police deployed facial recognition technology at the June 2017 Champions League soccer final in Cardiff, and 92% of the people the system identified as matches for suspects were false positives.
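
That number is less surprising than it sounds: it's the base-rate fallacy in action. When genuine targets are rare in an enormous crowd, even a fairly accurate matcher produces mostly false alarms. A worked example with hypothetical numbers (not the actual Cardiff figures):

```python
# Base-rate illustration with hypothetical numbers (not the actual
# Cardiff deployment figures): even a seemingly accurate face matcher
# yields mostly false positives when real suspects are rare.
crowd = 170_000             # hypothetical attendees scanned
suspects = 500              # hypothetical people actually on the watchlist
sensitivity = 0.90          # fraction of real suspects the system flags
false_positive_rate = 0.01  # fraction of innocents wrongly flagged

true_alerts = suspects * sensitivity                      # 450
false_alerts = (crowd - suspects) * false_positive_rate   # 1,695
share_false = false_alerts / (true_alerts + false_alerts)
print(f"{share_false:.0%} of alerts are false positives")  # ~79%
```

A matcher that's wrong about only 1% of innocent faces still buries the real hits under false alarms, because innocent faces outnumber suspects hundreds to one.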

Should I use an algorithm here? EFF's 5-point checklist

The Electronic Frontier Foundation's Jamie Williams and Lena Gunn have drawn up an annotated five-point list of questions to ask yourself before using a machine-learning algorithm to make predictions and guide outcomes.

Palantir has figured out how to make money by using algorithms to ascribe guilt to people; now they're looking for new customers

In 2009, JP Morgan Chase's "special ops" specialist was an ex-Secret Service agent named Peter Cavicchia III, who retained Palantir to spy on everyone in the company in search of "insider threats" — and even got the bank to invest in Palantir.

UK police train machine-learning model using Experian data that stereotypes people based on garden size, first names, and postal codes

The police in Durham, England bought a license to the "Mosaic" dataset from the credit bureau Experian, which includes data on 50,000,000 Britons, in order to train a machine learning system called HART ("Harm Assessment Risk Tool") that tries to predict whether someone will reoffend.
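
Features like postcode and first name are worrying precisely because they act as proxies for protected characteristics: exclude race or class from the model's inputs, and the model can reconstruct them anyway. A synthetic sketch of the proxy problem (invented data, not HART or Mosaic):

```python
# Synthetic sketch (not HART or Mosaic data) of why postcode-style
# features are dangerous: even if a protected attribute is excluded,
# a model can recover it from the proxy with high accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
postcode_area = rng.integers(0, 2, size=n)  # hypothetical: two postcode areas
# Residential segregation: area 1 is 90% group B, area 0 is 90% group A.
protected_group = (rng.random(n) < np.where(postcode_area == 1, 0.9, 0.1)).astype(int)

X = postcode_area.reshape(-1, 1)
proxy_model = LogisticRegression().fit(X, protected_group)
accuracy = proxy_model.score(X, protected_group)
print(f"protected group recoverable from postcode alone: {accuracy:.0%}")
```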

Automating Inequality: using algorithms to create a modern "digital poor-house"

Weeks before the publication of Virginia Eubanks's new book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, knowledgeable friends were already urging me to read it; having followed Eubanks's work for years, I didn't need much convincing.

Anyone who claims that machine learning will save money in high-stakes government decision-making is lying

danah boyd (previously) writes enthusiastically about Virginia Eubanks's forthcoming book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, which she calls "the best ethnography I’ve read in years," "on par with Barbara Ehrenreich’s 'Nickel and Dimed' or Matthew Desmond’s 'Evicted.'"

How machine learning engineers can detect and debug algorithmic bias

Ben Lorica, O'Reilly's chief data scientist, has posted slides and notes from his talk at last December's Strata Data Conference in Singapore, "We need to build machine learning tools to augment machine learning engineers."
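
One of the basic checks such tooling automates is comparing a model's outcomes across demographic groups. A minimal sketch of a demographic-parity check (hypothetical data and column names):

```python
# Minimal sketch of one bias check such tooling automates: comparing a
# model's positive-prediction rate across demographic groups
# ("demographic parity"). The data and column names are hypothetical.
import pandas as pd

preds = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b"],
    "predicted_positive": [1, 1, 0, 1, 0, 0, 0],
})

rates = preds.groupby("group")["predicted_positive"].mean()
print(rates)                                      # per-group selection rates
print("parity gap:", rates.max() - rates.min())   # a large gap is a red flag
```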

Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier

In 2015, a black software developer named Jacky Alciné revealed that the image classifier used by Google Photos was labeling black people as "gorillas."
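
Notably, the fix wasn't a better model: Google Photos reportedly just stopped returning the offending labels at all. As post-processing, that amounts to something like this hypothetical sketch (not Google's actual code; the blocklist contents are per Wired's 2018 reporting):

```python
# Hypothetical sketch of the reported fix: rather than retraining the
# classifier, suppress a blocklist of labels in post-processing.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_labels(predictions: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Drop blocklisted labels from (label, confidence) predictions."""
    return [(label, conf) for label, conf in predictions
            if label.lower() not in BLOCKED_LABELS]

print(filter_labels([("gorilla", 0.97), ("primate", 0.88)]))  # [('primate', 0.88)]
```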

Statistical proof that voter ID laws are racially discriminatory

In ADGN: An Algorithm for Record Linkage Using Address, Date of Birth, Gender, and Name, newly published in Statistics and Public Policy, a pair of researchers from Harvard and Tufts build a statistical model to analyze the impact of voter ID laws passed in Republican-controlled states as part of a wider project explicitly aimed at suppressing the votes of racialised people, who have historically been likely to vote Democrat.
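
As the title says, ADGN links records across databases by matching on address, date of birth, gender, and name. A minimal sketch of that style of linkage (hypothetical records; the published algorithm additionally copes with non-unique and error-ridden keys):

```python
# Minimal sketch of record linkage on (address, date of birth, gender,
# name), in the spirit of ADGN. The records are hypothetical.
voter_rolls = [
    {"name": "J. Smith", "dob": "1950-03-01", "gender": "F", "address": "12 Oak St"},
    {"name": "A. Brown", "dob": "1946-11-02", "gender": "M", "address": "9 Mill Rd"},
]
id_records = [
    {"name": "J. Smith", "dob": "1950-03-01", "gender": "F", "address": "12 Oak St"},
    {"name": "R. Jones", "dob": "1988-07-20", "gender": "M", "address": "4 Elm Ave"},
]

def key(rec):
    """The ADGN linkage key: address, date of birth, gender, name."""
    return (rec["address"], rec["dob"], rec["gender"], rec["name"])

have_id = {key(r) for r in id_records}
unmatched = [r for r in voter_rolls if key(r) not in have_id]
print(len(unmatched), "voter-roll record(s) with no matching ID record")
```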
