Cathy "Weapons of Math Destruction" O'Neil wants us to have empathy for Big Tech CEOs like Mark Zuckerberg and Jack Dorsey, who are "monumentally screwed, because they have no idea how to tame the monsters they have created."
Kudos to the Gates Foundation, seriously: after spending $775m on the Intensive Partnerships for Effective Teaching, a Big Data initiative to improve education for poor and disadvantaged students, they hired outside auditors to evaluate the program's effectiveness, and published that report, even though it shows that the approach did no good on balance and arguably caused real harms to teachers and students.
The Electronic Frontier Foundation's Jamie Williams and Lena Gunn have drawn up an annotated five-point list of questions to ask yourself before using a machine-learning algorithm to make predictions and guide outcomes.
Weeks before the publication of Virginia Eubanks's new book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, knowledgeable friends were already urging me to read it; that was a no-brainer, since I've followed Eubanks's work for years.
There are 50 hospitals on 5 continents using Watson for Oncology, an IBM product that, for a fee, ingests cancer patients' records and then makes treatment recommendations and suggests journal articles for further reading.
Robot law pioneer Ryan Calo (previously) has published a "roadmap" for an "artificial intelligence policy…to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration."
Once big data systems agglomerate enough data about you to predict whether you are likely to get sick or badly injured, insurers will be able to deny coverage (or charge so much for it that it amounts to the same thing) to anyone who is likely to get sick, forcing everyone who might ever need insurance into medical bankruptcy, and turning Medicaid into a giant "high-risk pool" that taxpayers foot the bill for.
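To see why perfect prediction is poison to pooling, here's a toy back-of-the-envelope sketch (all numbers invented): once an insurer can sort customers by predicted risk, the shared premium splits into a trivial one for the lucky and an unpayable one for the sick.

```python
# Toy illustration (made-up numbers): what happens to insurance
# premiums once an insurer can sort customers by predicted risk.

population = 1000
high_risk = 50                 # people predicted to get sick
expected_cost_high = 100_000   # expected annual claims per high-risk person
expected_cost_low = 1_000      # expected annual claims per low-risk person

total_claims = (high_risk * expected_cost_high
                + (population - high_risk) * expected_cost_low)

# Pooled insurance: everyone pays the average expected cost.
pooled_premium = total_claims / population
print(f"pooled premium:    ${pooled_premium:,.0f}")       # ~$5,950

# Perfectly risk-sorted insurance: each group pays its own expected cost.
print(f"low-risk premium:  ${expected_cost_low:,.0f}")    # $1,000
print(f"high-risk premium: ${expected_cost_high:,.0f}")   # $100,000 -- no insurance at all
```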
Nesta's Juan Mateos-Garcia proposes that "entropic forces" make algorithmic decision-making tools worse over time, requiring that they be continuously maintained and improved (this is also a key idea from Cathy O'Neil's Weapons of Math Destruction: a machine-learning system is only honest if someone is continuously matching its predictions to reality and refining its model based on the mistakes it makes).
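Here's a minimal sketch of that maintenance loop, with hypothetical interfaces: track a rolling window of predictions against observed outcomes, and flag the model for retraining once its accuracy decays.

```python
# Minimal sketch (hypothetical interfaces) of the feedback loop O'Neil
# describes: a model is only honest if its predictions are continuously
# checked against reality and the model is refit when it drifts.

from collections import deque

class MonitoredModel:
    def __init__(self, model, window=500, min_accuracy=0.8):
        self.model = model
        self.recent = deque(maxlen=window)   # rolling record of hits/misses
        self.min_accuracy = min_accuracy

    def predict(self, features):
        return self.model.predict(features)

    def record_outcome(self, prediction, actual):
        """Feed ground truth back in as it arrives."""
        self.recent.append(prediction == actual)

    def drifted(self):
        """True once rolling accuracy falls below the threshold."""
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        return sum(self.recent) / len(self.recent) < self.min_accuracy

# In a batch job: if monitored.drifted(), retrain on fresh labeled data
# rather than letting the stale model keep making decisions.
```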
The best way to fight gerrymandering is to prove to courts that electoral districts have been unfairly formed, a tactic that's been used successfully in places like North Carolina; but for this to work, you need good demographic data to show that the district is unfair, and for that, you need an accurate census.
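One measure that has come up in these court fights is the "efficiency gap," which compares how many votes each party "wastes" under a districting plan (all losing votes, plus winning votes beyond the majority threshold); here's a sketch with made-up district totals.

```python
# Sketch of the "efficiency gap" metric used in gerrymandering litigation.
# Sign convention: positive means party A wasted more votes than B
# (the map disadvantages A); negative means it disadvantages B.

def wasted(votes_a, votes_b):
    """Return (wasted_a, wasted_b) for one two-party district."""
    needed = (votes_a + votes_b) // 2 + 1   # votes needed to win
    if votes_a > votes_b:
        return votes_a - needed, votes_b    # A's surplus, all of B's votes
    return votes_a, votes_b - needed

def efficiency_gap(districts):
    wa = wb = total = 0
    for a, b in districts:
        da, db = wasted(a, b)
        wa, wb, total = wa + da, wb + db, total + a + b
    return (wa - wb) / total

# Made-up example: party B "packed" into one district, "cracked" elsewhere.
plan = [(55_000, 45_000), (55_000, 45_000), (55_000, 45_000), (20_000, 80_000)]
print(f"efficiency gap: {efficiency_gap(plan):+.1%}")  # about -32.5%: map hurts B
```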
Many insurers offer breaks to people who wear activity trackers that gather data on them; as Cathy "Mathbabe" O'Neil points out, the allegedly "anonymized" data-collection is trivial to re-identify (so this data might be used against you), and, more broadly, the real business model for this data isn't improving your health outcomes: it's dividing the world into high-risk and low-risk people, so insurers can charge people more.
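To see how trivial that re-identification is, here's a sketch with invented records: the "anonymized" data still carries quasi-identifiers (ZIP code, birth date, sex) that can be joined against any public roll that lists names, the same trio Latanya Sweeney famously estimated uniquely identifies roughly 87% of Americans.

```python
# Sketch of re-identification by linkage: join "anonymized" tracker data
# to a public roll on shared quasi-identifiers. All records invented.

anonymized_steps = [
    {"zip": "60614", "dob": "1984-03-07", "sex": "F", "avg_daily_steps": 2100},
    {"zip": "60614", "dob": "1979-11-21", "sex": "M", "avg_daily_steps": 11400},
]

public_voter_roll = [
    {"name": "Jane Roe", "zip": "60614", "dob": "1984-03-07", "sex": "F"},
    {"name": "John Doe", "zip": "60614", "dob": "1979-11-21", "sex": "M"},
]

for record in anonymized_steps:
    matches = [p["name"] for p in public_voter_roll
               if (p["zip"], p["dob"], p["sex"])
               == (record["zip"], record["dob"], record["sex"])]
    if len(matches) == 1:  # a unique quasi-identifier combination
        print(f"{matches[0]}: {record['avg_daily_steps']} steps/day")
```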
Cathy "Weapons of Math Destruction" O'Neil nailed Trump's tactics when she compared him to a busted machine-learning algorithm, throwing a ton of random ideas out, listening for his base to cheer the ones that excited their worst instincts, and then doubling down on them.
An article that went viral last week attributed Trump's Electoral College victory to the dark big data sorcery of Cambridge Analytica, a dirty, dementor-focused big data company that specializes in political campaigns.
Social scientist/cybersecurity expert Susan Landau (previously) and Cathy "Weapons of Math Destruction" O'Neil take to Lawfare to explain why it would be a dangerous mistake for the FBI to use machine learning-based chatbots to flush out potential terrorists online.
The Data & Society institute (dedicated to critical, interdisciplinary perspectives on big data) held an online seminar devoted to Cathy O'Neil's groundbreaking book Weapons of Math Destruction, which showed how badly designed algorithmic decision-making systems can create, magnify and entrench the social problems they're supposed to solve, perpetuating inequality, destabilizing the economy, and making a small number of people very, very rich.
It's been six years since I quit Facebook, and not a day goes by that I don't realize that my life is better for it.
The Guardian has published a long excerpt from Cathy O'Neil's essential new book, Weapons of Math Destruction, in which O'Neil describes how shoddy machine-learning companies have come to dominate hiring for waged employment, selling their dubious products to giant companies that use them to decide who can and can't work.
If you've read Cathy O'Neil's Weapons of Math Destruction (you should, right NOW), then you know that machine learning can be a way to apply a deadly, nearly irrefutable veneer of objectivity to our worst, most biased practices.
I've been writing about the work of Cathy "Mathbabe" O'Neil for years: she's a radical data scientist with a Harvard PhD in mathematics, who coined the term "Weapons of Math Destruction" to describe the ways that sloppy statistical modeling is punishing millions of people every day, and in more and more cases, destroying lives. Today, O'Neil brings her argument to print with a fantastic, plainspoken call to arms called (what else?) Weapons of Math Destruction.
In Chicago, the "Heat List" system is used to direct policing resources, based on data-mining of social media to identify potential gang-members; the model tells the cops where to go and who to arrest, and is supposed to reduce both violent crime and the likelihood that suspects themselves will be killed — but peer-reviewed analysis (Scihub mirror) of the program shows that while being on the Heat List increases your chances of being harassed and arrested by Chicago PD, it does not improve crime rates.
Trump's algorithm is to say semi-random things until his crowd roars its approval, then he iteratively modifies those statements, seeking more and more approval, until he maxes out and tries a new tack.
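O'Neil's analogy maps neatly onto an epsilon-greedy bandit; here's a toy sketch (talking points and crowd model entirely invented) of a loop that mostly doubles down on its best-scoring line and occasionally blurts out something semi-random.

```python
# Toy sketch of the "busted machine-learning algorithm" analogy:
# an epsilon-greedy bandit that tries talking points, measures the
# roar of the crowd, and doubles down on whatever scores best.

import random
random.seed(42)

talking_points = ["wall", "trade", "media", "crowds"]
scores = {t: 0.0 for t in talking_points}   # running estimate of applause
counts = {t: 0 for t in talking_points}

def crowd_roar(topic):
    """Stand-in for the audience: some topics reliably excite it."""
    base = {"wall": 0.9, "trade": 0.5, "media": 0.7, "crowds": 0.3}[topic]
    return base + random.uniform(-0.2, 0.2)

for rally in range(500):
    if random.random() < 0.1:            # explore: say something semi-random
        topic = random.choice(talking_points)
    else:                                # exploit: double down on the best line
        topic = max(scores, key=scores.get)
    roar = crowd_roar(topic)
    counts[topic] += 1
    scores[topic] += (roar - scores[topic]) / counts[topic]  # incremental mean

print(max(scores, key=scores.get))  # the line the "algorithm" converges on
```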