Algorithms try to channel us into repeating our lives

Molly Sauter (previously) describes in gorgeous, evocative terms how the algorithms in our lives try to funnel us into acting the way we always have, or, failing that, like everyone else does. Read the rest

Michigan Governor Rick Snyder's MIDAS program had a 93% error rate and falsely accused 20,000 workers of unemployment fraud

Under the cruel austerity of Michigan governor Rick Snyder -- whose policies led to the mass poisoning of children in Flint -- claims for unemployment insurance were aggressively investigated, with the state operating on the assumption that any worker who claimed a benefit was probably committing fraud. Read the rest

We need a new science of measurement to figure out when AI is working and why it fails

Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems." Read the rest

Predictive policing predicts police harassment, not crime

In Chicago, the "Heat List" system directs policing resources, using data-mining of social media to identify potential gang members; the model tells the cops where to go and who to arrest, and is supposed to reduce both violent crime and the likelihood that suspects themselves will be killed. But a peer-reviewed analysis (Scihub mirror) of the program shows that while being on the Heat List increases your chances of being harassed and arrested by the Chicago PD, it does not improve crime rates. Read the rest

Forget Skynet: AI is already making things terrible for people who aren't rich white dudes

Kate Crawford (previously) takes to the New York Times's editorial page to ask why rich white guys act like the big risk of machine-learning systems is that they'll evolve into Skynet-like apex predators that subjugate the human race, when there are already rampant problems with machine learning: algorithmic racist sentencing, algorithmic racist and sexist discrimination, algorithmic harassment, algorithmic hiring bias, algorithmic terrorist watchlisting, algorithmic racist policing, and a host of other algorithmic cruelties and nonsense, each one imbued with unassailable objectivity thanks to its mathematical underpinnings. Read the rest

Algorithmic risk-assessment: hiding racism behind "empirical" black boxes

Courts around America and the world increasingly rely on risk-assessment software in determining bail and sentencing; these systems require the accused to answer more than a hundred questions, which are fed into a secret model that spits out a single-digit "risk score" that courts use to decide who to lock up, and for how long. Read the rest
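The real models are proprietary secrets, so nobody outside the vendors can audit them -- but the overall shape is simple enough to sketch. Here's a hypothetical illustration (the question names and weights are invented, not taken from any actual product) of how a hundred-odd questionnaire answers collapse into a single digit:

```python
# Hypothetical sketch of a questionnaire-driven risk score.
# The real sentencing models are trade secrets; this only illustrates
# the shape of the pipeline: many answers in, one digit out.
# All question names and weights below are invented for illustration.
def risk_score(answers: dict[str, int], weights: dict[str, float]) -> int:
    """Collapse questionnaire answers into a 1-10 'risk score'."""
    raw = sum(weights.get(q, 0.0) * a for q, a in answers.items())
    # Clamp to a single digit -- the number the court actually sees.
    return max(1, min(10, round(raw)))

weights = {"prior_arrests": 0.8, "age_under_25": 2.0, "unstable_housing": 1.5}
answers = {"prior_arrests": 3, "age_under_25": 1, "unstable_housing": 1}
print(risk_score(answers, weights))
```

Everything contestable -- which questions get asked, how they're weighted -- is hidden inside that secret weight table, while the output arrives in court looking like an objective number.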

PayPal refuses to deliver online purchases to UK addresses containing "Isis"

The Isis River, which flows through the English university city of Oxford, has inspired many place names that include "Isis," including "Isis Close." Read the rest

Nintendo claims ownership over fans' Minecraft/Mario mashups

Nintendo continues its long-running campaign of legal harassment against its biggest fans: this time, they're targeting fan-videos showing gameplay from the official, licensed Mario/Minecraft mashup pack for the Wii U. Read the rest

Algorithmic cruelty: when Gmail adds your harasser to your speed-dial

Inbox by Gmail combs through your email looking for frequent correspondents and places the people who email you the most in a "speed dial" sidebar (which you can't edit), putting their names and pictures front and center every time you open your email. Read the rest
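The cruelty falls straight out of the ranking rule. This is not Google's actual code -- just a minimal sketch of the behavior described above, assuming a simple count-and-rank approach:

```python
from collections import Counter

# Hypothetical sketch of frequency-based contact ranking -- a sketch of
# the behavior described above, not Google's implementation. Note the
# absence of any block or mute list: whoever emails you most wins a slot,
# even if that's a harasser.
def speed_dial(senders: list[str], slots: int = 5) -> list[str]:
    """Rank senders by message count and keep the top `slots`."""
    counts = Counter(senders)
    return [sender for sender, _ in counts.most_common(slots)]

inbox = ["harasser@example.com"] * 40 + ["friend@example.com"] * 12
print(speed_dial(inbox, slots=2))
```

"Emails you constantly" is exactly what a harasser does, so a pure frequency signal with no user override promotes them to the top of your inbox by design.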