The Gates Foundation spent $775m on a Big Data education project that was worse than useless

Kudos to the Gates Foundation, seriously: after spending $775m on the Intensive Partnerships for Effective Teaching, a Big Data initiative to improve education for poor and disadvantaged students, they hired outside auditors to evaluate the program's effectiveness, and published that report, even though it shows that the approach did no good on balance and arguably caused real harms to teachers and students.

A roadmap for AI policy questions

Robot law pioneer Ryan Calo (previously) has published a "roadmap" for an "artificial intelligence policy…to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration."

Big data + private health insurance = game over

Once big data systems agglomerate enough data about you to predict whether you are likely to get sick or badly injured, insurers will be able to deny coverage (or charge so much for it that it amounts to the same thing) to anyone who is likely to get sick, forcing everyone who might ever need insurance into medical bankruptcy, and turning Medicaid into a giant "high-risk pool" that taxpayers foot the bill for.
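To see why precise prediction unravels insurance, here is a stylized Python sketch with invented numbers (nothing here comes from any actual insurer's pricing): under risk pooling everyone pays the averaged premium, but once an insurer can predict each individual's expected cost, the "actuarially fair" premium for someone likely to get sick converges on the full cost of their care, which is no insurance at all.

```python
# Stylized illustration with invented numbers: pooled premiums vs.
# individually predicted ("big data") premiums.
expected_annual_cost = {"low_risk": 2_000, "high_risk": 180_000}

# Risk pooling: everyone pays the same averaged premium.
pooled_premium = sum(expected_annual_cost.values()) / len(expected_annual_cost)
print(f"pooled premium for everyone: ${pooled_premium:,.0f}")

# Big-data underwriting: each person is charged (at least) their own
# predicted cost, which for the high-risk person is insurance in name only.
for group, cost in expected_annual_cost.items():
    print(f"{group} premium: ${cost:,.0f}")
```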

Algorithmic decision-making: an arms-race between entropy, programmers and referees

Nesta's Juan Mateos-Garcia proposes that "entropic forces" make algorithmic decision-making tools worse over time, requiring that they be continuously maintained and improved (this is also a key idea from Cathy O'Neil's Weapons of Math Destruction: a machine-learning system is only honest if someone is continuously matching its predictions to reality and refining its model based on the mistakes it makes).
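As a rough illustration of that maintenance loop, here is a minimal Python sketch: it assumes a scikit-learn-style classifier and feature/label data held as plain Python lists, and the 0.8 accuracy threshold and the `maintain_model` name are my own illustrative choices, not anything specified by Mateos-Garcia or O'Neil.

```python
# Keep matching the model's predictions to reality; refit when accuracy decays.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def maintain_model(model, new_X, new_y, history_X, history_y, threshold=0.8):
    """Score the model on newly observed outcomes; retrain if it has drifted.

    new_X/new_y are the latest labelled outcomes; history_X/history_y are the
    accumulated training data (all plain Python lists in this sketch).
    """
    accuracy_now = accuracy_score(new_y, model.predict(new_X))
    if accuracy_now < threshold:
        # Entropy has caught up with the model: fold the newly labelled
        # outcomes into the training set and refit.
        history_X = history_X + new_X
        history_y = history_y + new_y
        model = LogisticRegression(max_iter=1000).fit(history_X, history_y)
    return model, history_X, history_y, accuracy_now
```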

Wearing an activity tracker gives insurance companies the data they need to discriminate against people like you

Many insurers offer breaks to people who wear activity trackers that gather data on them; as Cathy "Mathbabe" O'Neil points out, the allegedly "anonymized" data these trackers collect is trivial to re-identify (so it might be used against you), and, more broadly, the real business model for this data isn't improving your health outcomes — it's dividing the world into high-risk and low-risk people, so insurers can charge the high-risk ones more.
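A toy example of the kind of "linkage attack" O'Neil is warning about, with invented tables and column names: the tracker export carries no names, but joining it to any public record on a few quasi-identifiers (ZIP code, birth year, sex) puts the names back.

```python
import pandas as pd

tracker = pd.DataFrame({
    "zip": ["60614", "60614", "98101"],
    "birth_year": [1984, 1991, 1975],
    "sex": ["F", "M", "F"],
    "daily_steps": [2100, 11400, 6300],   # the "anonymized" health signal
})

voter_roll = pd.DataFrame({                # any public record will do
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "zip": ["60614", "60614", "98101"],
    "birth_year": [1984, 1991, 1975],
    "sex": ["F", "M", "F"],
})

# One merge on the quasi-identifiers and the low-step (high-risk) people
# have names attached to their health data again.
reidentified = tracker.merge(voter_roll, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "daily_steps"]])
```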

A checklist for figuring out whether your algorithm is a "weapon of math destruction"

The Data & Society Research Institute (dedicated to critical, interdisciplinary perspectives on big data) held an online seminar devoted to Cathy O'Neil's groundbreaking book Weapons of Math Destruction, which showed how badly designed algorithmic decision-making systems can create, magnify and entrench the social problems they're supposed to solve, perpetuating inequality, destabilizing the economy, and making a small number of people very, very rich.

Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives

I've been writing about the work of Cathy "Mathbabe" O'Neil for years: she's a radical data scientist with a Harvard PhD in mathematics, who coined the term "Weapons of Math Destruction" to describe the ways that sloppy statistical modeling is punishing millions of people every day, and in more and more cases, destroying lives. Today, O'Neil brings her argument to print with a fantastic, plainspoken call to arms called (what else?) Weapons of Math Destruction.

Predictive policing predicts police harassment, not crime

In Chicago, the "Heat List" system is used to direct policing resources, based on data-mining of social media to identify potential gang members; the model tells the cops where to go and who to arrest, and is supposed to reduce both violent crime and the likelihood that suspects themselves will be killed — but a peer-reviewed analysis (Scihub mirror) of the program shows that while being on the Heat List increases your chances of being harassed and arrested by Chicago PD, it does nothing to reduce crime.