When the HR department is a robotic phrenologist: "face-scanning algorithm" gains popularity as a job-applicant screener

HireVue is an "AI" company that employers contract with to screen job applicants: it conducts an hour-long videoconference session with each applicant, analyzing their facial expressions, word choices and other factors (the company does not actually explain what these are, nor has it ever subjected its system to independent scrutiny) and makes recommendations about who should get the job. Read the rest

Police cameras to be augmented with junk-science "microexpression" AI lie-detectors

The idea that you can detect lies by analyzing "microexpressions" has absorbed billions in spending by police forces and security services, even though it's junk science that performs worse than a coin toss. Read the rest

Pentagon reassures public that its autonomous robotic tank adheres to "legal and ethical standards" for AI-driven killbots

The Pentagon is seeking bids to improve its Advanced Targeting and Lethality Automated System (ATLAS) so that it can "acquire, identify, and engage targets at least 3X faster than the current manual process." Read the rest

London councils plan to slash benefit payments with an "anti-fraud" system known to have a 20% failure rate

BAE developed the London Counter Fraud Hub, which uses machine learning systems to detect benefit fraud; after trials in the boroughs of Camden, Ealing, Croydon and Islington, the system has been approved for regular use, despite an admitted 20% failure rate. Read the rest

Beyond GIGO: how "predictive policing" launders racism, corruption and bias to make them seem empirical

"Predictive policing" is the idea that you can feed crime stats to a machine-learning system and it will produce a model that can predict crime. It is garbage. Read the rest

Customizable ethics checklists for Big Data researchers

Deon is a project to create automated "ethics checklists" for data science projects; by default, running the code creates a comprehensive checklist covering data collection and storage, modeling and deployment: the checklist items aren't specific actions, they're "meant to provoke discussion among good-faith actors who take their ethical responsibilities seriously. Because of this, most of the items are framed as prompts to discuss or consider. Teams will want to document these discussions and decisions for posterity." Read the rest
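
Deon ships as a command-line tool on PyPI; a minimal sketch of generating the default checklist from a Python script is below (flags reflect the tool's documented interface at the time of writing, so check `deon --help` against your installed version).

```python
# Sketch: write Deon's default data-ethics checklist to a markdown file.
# Install first with: pip install deon
import subprocess

# Produces ETHICS.md covering data collection, storage, modeling
# and deployment, as described above.
subprocess.run(["deon", "--output", "ETHICS.md"], check=True)
```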

Stet, a gorgeous, intricate, tiny story of sociopathic autonomous vehicles

Sarah Gailey's micro-short-story STET is a beautiful piece of innovative storytelling that deftly blends the three ingredients of perfect science fiction: sharply observed technological speculation that reflects on our present moment; a narrative arc for characters we sympathize with; and a sting in the tail that will stay with you long after the story's been read. Read the rest

Amazon trained a sexism-fighting, resume-screening AI with sexist hiring data, so the bot became sexist

Some parts of machine learning are incredibly esoteric and hard to grasp, surprising even seasoned computer science pros; other parts of it are just the same problems that programmers have contended with since the earliest days of computation. The problem Amazon had with its machine-learning-based system for screening job applicants was the latter. Read the rest
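
The failure mode is old-fashioned GIGO, and it's easy to reproduce on synthetic data. The toy sketch below (not Amazon's system; the résumé strings and labels are invented) trains a text classifier on hiring decisions that historically skewed against women, then inspects what it learned: the token "women" picks up a strongly negative weight, just as Reuters reported Amazon's model penalizing résumés containing "women's".

```python
# Toy reconstruction of the reported failure: biased labels in, bias out.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    ("captain of chess club, python, java", 1),
    ("python, java, leadership award", 1),
    ("captain of women's chess club, python, java", 0),   # biased history
    ("women's coding society, python, leadership", 0),    # biased history
] * 25  # repeat to give the model something to fit

texts, hired = zip(*resumes)
vec = CountVectorizer()
X = vec.fit_transform(texts)
model = LogisticRegression().fit(X, hired)

idx = vec.vocabulary_["women"]  # default tokenizer strips the "'s"
print(model.coef_[0][idx])      # strongly negative: the bias was learned
```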

Good news: California ends cash bail! Bad news: It's been replaced with a black-box algorithm.

Cash bail has turned American jails into debtors' prisons, where the wealthy can go about their business while awaiting trial, and the poor can languish for months or even years in jail because they can't make bail (in practice, they generally plead guilty, regardless of their innocence). Read the rest

The ACLU showed that Amazon's facial recognition system thinks members of Congress are felons, so now Congress is taking action

After learning that Amazon was pushing Rekognition, its facial recognition tool, for use in policing (a global phenomenon that is gaining momentum despite the material unsuitability of these tools in policing contexts), the ACLU of Northern California had a brainwave: they asked Rekognition to evaluate the faces of the 115th Congress of the United States. Read the rest
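
For the curious, a hedged sketch of the kind of query the ACLU ran is below, using the real boto3 Rekognition API: search a legislator's photo against a face collection built from arrest photos. The collection name and file path are placeholders, and the 80% match threshold is Rekognition's default. Running it requires AWS credentials and a populated collection.

```python
# Sketch: search one face against a mugshot collection with Rekognition.
import boto3

client = boto3.client("rekognition")

with open("member_of_congress.jpg", "rb") as f:   # placeholder image
    resp = client.search_faces_by_image(
        CollectionId="mugshot-collection",        # placeholder collection
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,                    # Rekognition's default
    )

# Any result here is a claimed match; for Congress, 28 were false.
for match in resp["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```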

The Gates Foundation spent $775m on a Big Data education project that was worse than useless

Kudos to the Gates Foundation, seriously: after spending $775m on the Intensive Partnerships for Effective Teaching, a Big Data initiative to improve education for poor and disadvantaged students, they hired outside auditors to evaluate the program's effectiveness and published that report, even though it shows that the approach did no good on balance and arguably caused real harms to teachers and students. Read the rest

ICE hacked its algorithmic risk-assessment tool so it recommended detention for everyone

One of the more fascinating and horrible details in Reuters' thoroughly fascinating and horrible long-form report on Trump's cruel border policies is this nugget: ICE hacked the risk-assessment tool it used to decide whom to imprison so that it recommended that everyone should be detained. Read the rest
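
The mechanics of the "hack" are mundane: per the Reuters reporting, the tool's "release" recommendation was simply removed, leaving detention as the only possible output. A toy illustration (not ICE's actual code) of how deleting one branch turns a risk assessment into a rubber stamp:

```python
# Toy: a "risk assessment" whose release branch has been removed.
def recommend(risk_score: float) -> str:
    if risk_score >= 0.5:
        return "detain"
    # return "release"          # <- the branch that was removed
    return "detain"             # every score now yields the same outcome

print({s: recommend(s) for s in (0.1, 0.5, 0.9)})  # all "detain"
```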

Thanks to 2016's trade secret law and algorithmic justice, America's courts have become AI-Kafka nightmares

In 2014, the Alice decision made it much harder to patent software in the USA; in 2016, Congress passed the Defend Trade Secrets Act, creating the first federal trade secrets statute: the result of these two developments is that software companies aggressively switched from patents to trade secrets as a means of controlling competition and limiting inspection and criticism of their products. Read the rest

Welsh police deployed facial recognition tech with a 92% false positive rate, but they're sure it's fine

The South Wales Police deployed facial recognition technology at the June 2017 Champions League soccer final in Cardiff, and 92% of the people the system flagged as suspects were false positives. Read the rest
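
A 92% false-positive rate isn't even surprising: it's base-rate arithmetic. When the people you're looking for are a tiny fraction of a huge crowd, even a seemingly accurate matcher produces mostly false alarms. The numbers below are illustrative, not the South Wales figures:

```python
# Back-of-the-envelope base-rate arithmetic for crowd-scanning.
crowd = 170_000          # attendees scanned (illustrative)
watchlist_present = 500  # people of interest in the crowd (illustrative)
tpr = 0.90               # chance a watchlisted face is matched
fpr = 0.01               # chance an innocent face is (mis)matched

true_hits = watchlist_present * tpr
false_hits = (crowd - watchlist_present) * fpr

share_false = false_hits / (true_hits + false_hits)
print(f"{share_false:.0%} of all matches are false positives")  # ~79%
```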

Should I use an algorithm here? EFF's 5-point checklist

The Electronic Frontier Foundation's Jamie Williams and Lena Gunn have drawn up an annotated five-point list of questions to ask yourself before using a machine-learning algorithm to make predictions and guide outcomes. Read the rest

Palantir has figured out how to make money by using algorithms to ascribe guilt to people, now they're looking for new customers

In 2009, JP Morgan Chase's "special ops" guy was an ex-Secret Service agent called Peter Cavicchia III, who retained Palantir to spy on everyone in the company in search of "insider threats"; he even got the bank to invest in Palantir. Read the rest

UK police train machine-learning model using Experian data that stereotypes people based on garden size, first names, and postal codes

The police in Durham, England bought a license to the "Mosaic" dataset from the credit bureau Experian, which includes data on 50,000,000 Britons, in order to train a machine-learning system called HART ("Harm Assessment Risk Tool") that tries to predict whether someone will reoffend. Read the rest
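
Why do "neutral" features like postcodes matter? Because they act as proxies for the demographics the model supposedly never sees. The toy sketch below (synthetic data, not HART, which is reportedly a random-forest system) trains on a single postcode feature whose historical labels are skewed; the protected group is never a feature, yet predicted risk splits along it anyway, because postcode encodes it.

```python
# Toy: a proxy feature smuggling demographics into a reoffending model.
import random
from sklearn.linear_model import LogisticRegression

random.seed(1)
X, y = [], []
for _ in range(2000):
    group = random.random() < 0.5                     # hidden demographic
    postcode = int(group ^ (random.random() < 0.1))   # postcode tracks group
    reoffends = random.random() < 0.30                # same true rate for all
    # historical labels over-record offences in postcode 1
    label = int(reoffends or (postcode == 1 and random.random() < 0.20))
    X.append([postcode])
    y.append(label)

model = LogisticRegression().fit(X, y)
for pc in (0, 1):
    risk = model.predict_proba([[pc]])[0][1]
    print(f"postcode {pc}: predicted risk {risk:.2f}")  # ~0.30 vs ~0.44
```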
