Automated book-culling software drives librarians to create fake patrons to "check out" endangered titles

Two employees at the East Lake County Library created a fictional patron called Chuck Finley -- entering fake driver's license and address details into the library system -- and then used the account to check out 2,361 books over nine months in 2016, in order to trick the system into believing that the books they loved were being circulated to the library's patrons, thus rescuing the books from automated purges of low-popularity titles. Read the rest
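Circulation-driven weeding generally reduces to a blunt rule: no checkouts within some window means the title gets flagged for discard. Here's a minimal sketch of that kind of rule and why a single phantom checkout defeats it (the 24-month window and function names are illustrative assumptions, not East Lake County's actual system):

```python
from datetime import date, timedelta

# Illustrative purge rule: flag any title with no checkout in the
# last 24 months (the window is an assumption, not the real config).
CULL_WINDOW = timedelta(days=730)

def flag_for_discard(last_checkout: date, today: date) -> bool:
    """True if the culling system sees this title as dead stock."""
    return today - last_checkout > CULL_WINDOW

today = date(2016, 11, 1)
print(flag_for_discard(date(2013, 5, 2), today))    # True: purged
# One checkout by "Chuck Finley" resets the clock and saves the book:
print(flag_for_discard(date(2016, 10, 30), today))  # False: kept
```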

Indiana's "educational achievement" bonuses: teachers in rich schools get 20x more than those in poor districts

Indiana is one of many GOP-led states that assume that the poor performance of schools in poor neighborhoods is the fault of bad teaching -- and not, say, systemic poverty, the absence of funds raised by rich parents, hunger, mass incarceration -- and so teachers are offered bonuses for "improving" their students' outcomes, which generally means their standardized test scores (since presumptively bad teachers can't be trusted to evaluate their students' qualitative improvements). Read the rest

Datafication: the ideology that says that data isn't ideological

Cennydd Bowles's essay on "Datafication and ideological blindness" attacks the business/product-design vogue of basing every decision on "data," which is presumed to reflect objective truth, free from bias. Read the rest

Michigan Governor Rick Snyder's MIDAS program had a 93% error rate and falsely accused 20,000 workers of unemployment fraud

Under the cruel austerity of Michigan governor Rick Snyder -- whose policies led to the mass poisoning of children in Flint -- every claim for unemployment insurance was vigorously investigated, with the state operating on the assumption that any worker claiming a benefit was probably committing fraud. Read the rest
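The 93% figure is what a base-rate failure looks like: when real fraud is rare, even a detector that sounds accurate produces mostly false accusations. A toy Bayes calculation makes the point (every rate below is an invented assumption for illustration, not MIDAS's actual parameters):

```python
# Toy base-rate arithmetic -- all numbers are invented for illustration.
fraud_rate = 0.03            # assume 3% of claims are actually fraudulent
sensitivity = 0.90           # assume the detector catches 90% of real fraud
false_positive_rate = 0.30   # assume it wrongly flags 30% of honest claims

true_flags = fraud_rate * sensitivity
false_flags = (1 - fraud_rate) * false_positive_rate
precision = true_flags / (true_flags + false_flags)

print(f"Share of fraud flags that are correct: {precision:.0%}")  # ~8%
# i.e. an error rate around 92% -- in the neighborhood of MIDAS's 93%.
```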

A checklist for figuring out whether your algorithm is a "weapon of math destruction"

The Data & Society institute (dedicated to critical, interdisciplinary perspectives on big data) held an online seminar devoted to Cathy O'Neil's groundbreaking book Weapons of Math Destruction, which showed how badly designed algorithmic decision-making systems can create, magnify and entrench the social problems they're supposed to solve, perpetuating inequality, destabilizing the economy, and making a small number of people very, very rich. Read the rest

Washington data scientists: a chance to figure out how to use your skills for good

A pair of data scientists in Washington State are convening a meeting of "Data Scientists, Data Science Enthusiasts, and Advocates for Civic Liberties and Social Justice" who want to figure out how "Data Scientists & IT Professionals [can] use their expertise to help answer the current human questions which social and policy-based organizations are currently struggling to address?" Read the rest

Superstar academic economists charge $1000+/hr to defend disastrous corporate megamergers

In 1977 Richard Posner (then a prof at the University of Chicago's notorious ultra-libertarian school; now a federal judge) teamed up with an economist and law student to form Lexecon, which has since grown to a firm worth more than $130,000,000, whose major business is to serve as intellectual guns-for-hire who will produce plausible-seeming economic models defending giant corporate mergers against anti-trust regulators. Read the rest

Though crime happens everywhere, predictive policing tools send cops to poor/black neighborhoods

Researchers from the Human Rights Data Analysis Group (previously) reimplemented the algorithm behind Predpol, the predictive policing system that police departments around America have spent a fortune on to decide where to send their patrols; they fed it Oakland's 2010 arrest data, then asked it to predict where crime would occur in 2011. Read the rest
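The heart of the critique is a feedback loop: the model is trained on arrests, which record where police looked, not where crime happened, so yesterday's patrol pattern becomes tomorrow's "prediction." A minimal simulation of that loop (the two-neighborhood setup and greedy dispatch rule are illustrative; PredPol's actual model is an earthquake-aftershock variant):

```python
import random
random.seed(0)

# Two neighborhoods with identical underlying drug use...
true_crime_rate = {"A": 0.5, "B": 0.5}
# ...but a biased history: past patrols were concentrated in A.
arrests = {"A": 50, "B": 10}

for _ in range(365):
    # "Predict" the hotspot from historical arrests; send the patrol there.
    patrol = max(arrests, key=arrests.get)
    # Police only observe crime where they patrol, so only the
    # patrolled neighborhood can generate new arrest records.
    if random.random() < true_crime_rate[patrol]:
        arrests[patrol] += 1

print(arrests)  # e.g. {'A': 233, 'B': 10} -- the initial bias compounds
```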

The AI Now Report: social/economic implications of near-future AI

The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation -- along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight, to increasing the diversity of AI researchers, to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems. Read the rest

Jigsaw: "wildly ambitious" Google spin-out aimed at tackling "surveillance, extremist indoctrination, censorship"

Technologists have a dismal pattern: when it comes to engineering challenges ("build a global-scale comms platform") they rub their hands together with excitement; when it comes to the social challenges implied by the engineering ones ("do something about trolls") they throw their hands up and declare the problem to be too hard to solve. Read the rest

Rules for trusting "black boxes" in algorithmic control systems

Tim O'Reilly writes about the reality that more and more of our lives -- including whether you end up seeing this very sentence! -- are in the hands of "black boxes": algorithmic decision-makers whose inner workings are a secret from the people they affect. Read the rest

Blackballed by machine learning: how algorithms can destroy your chances of getting a job

The Guardian has published a long excerpt from Cathy O'Neil's essential new book, Weapons of Math Destruction, in which O'Neil describes how shoddy machine-learning companies have come to dominate hiring for waged employment, selling their dubious products to giant companies that use them to decide who can and can't work. Read the rest
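The "blackballing" mechanism is structural: when many employers license the same vendor's score, one wrong verdict locks a candidate out everywhere, whereas independent human screens are at least wrong in different directions. A toy comparison (all figures invented):

```python
import random
random.seed(1)

N_EMPLOYERS = 20
p_wrong_reject = 0.3  # assume each screen wrongly rejects a good candidate 30% of the time

# Twenty independent human screens: errors are uncorrelated.
human_offers = sum(random.random() > p_wrong_reject for _ in range(N_EMPLOYERS))

# One shared algorithmic screen: a single (possibly wrong) verdict,
# reused verbatim by every employer.
algo_offers = N_EMPLOYERS * (random.random() > p_wrong_reject)

print(human_offers, algo_offers)  # typically ~14 offers vs. all-or-nothing
```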

Sampling bias: how a machine-learning beauty contest awarded nearly all prizes to whites

If you've read Cathy O'Neil's Weapons of Math Destruction (you should, right NOW), then you know that machine learning can be a way to apply a deadly, nearly irrefutable veneer of objectivity to our worst, most biased practices. Read the rest
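In the Beauty.AI case the bias was in the sample: the training set contained very few dark-skinned faces, so the model simply had no notion of what a beautiful non-white face looked like. A stripped-down sketch of the failure mode, using a nearest-neighbor "judge" trained on a skewed sample (every number here is invented):

```python
import random
random.seed(2)

# Invented one-dimensional "appearance features": two groups drawn from
# different but equally valid distributions.
def sample(group):
    return random.gauss({"white": 0.0, "nonwhite": 3.0}[group], 1.0)

# Skewed training set: 95 examples from one group, 5 from the other.
train = [sample("white") for _ in range(95)] + [sample("nonwhite") for _ in range(5)]

def score(x):
    # Nearest-neighbor "beauty": similarity to the closest face the
    # model has ever seen -- dense coverage means higher scores.
    return -min(abs(x - v) for v in train)

contestants = [("white", sample("white")) for _ in range(50)] + \
              [("nonwhite", sample("nonwhite")) for _ in range(50)]
winners = sorted(contestants, key=lambda c: score(c[1]), reverse=True)[:10]
print([g for g, _ in winners])  # skews heavily 'white', purely from sampling
```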

Predictive policing predicts police harassment, not crime

In Chicago, the "Heat List" system is used to direct policing resources, based on social-network analysis of arrest records to identify potential gang members; the model tells the cops where to go and who to arrest, and is supposed to reduce both violent crime and the likelihood that suspects themselves will be killed. But peer-reviewed analysis (Scihub mirror) of the program shows that while being on the Heat List increases your chances of being harassed and arrested by the Chicago PD, it does nothing to reduce crime. Read the rest
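The study's design is worth spelling out: compare people on the list with statistically similar people off it, and ask which outcomes the list actually moves. A toy version of that comparison, with invented data in which listing drives police attention but doesn't touch victimization:

```python
import random
random.seed(3)

def outcomes(listed, n=10_000):
    # Invented causal model: listing raises arrest probability
    # (police attention) but leaves victimization unchanged.
    arrest_p = 0.25 if listed else 0.10
    victim_p = 0.05  # identical either way
    arrests = sum(random.random() < arrest_p for _ in range(n))
    victims = sum(random.random() < victim_p for _ in range(n))
    return round(arrests / n, 3), round(victims / n, 3)

print("listed:  ", outcomes(True))    # ~(0.25, 0.05)
print("unlisted:", outcomes(False))   # ~(0.10, 0.05)
```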

Trump is an object lesson in the problems of machine learning

Trump's algorithm is to say semi-random things until his crowd roars its approval; he then iteratively modifies those statements, seeking ever more approval, until he maxes out and tries a new tack. Read the rest
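That's a recognizable optimization strategy: greedy hill-climbing on an applause function, with random restarts when the local maximum is exhausted. A tongue-in-cheek sketch (the crowd's objective function is, of course, made up):

```python
import random
random.seed(4)

def applause(statement: float) -> float:
    # Made-up objective: how loudly this crowd roars at a given statement.
    return -(statement - 7.0) ** 2

def stump_speech(restarts=5, steps=100):
    best_line, best_roar = None, float("-inf")
    for _ in range(restarts):                     # "tries a new tack"
        line = random.uniform(0, 10)              # a semi-random opening
        for _ in range(steps):
            tweak = line + random.gauss(0, 0.3)   # iteratively modify
            if applause(tweak) > applause(line):  # keep what plays better
                line = tweak
        if applause(line) > best_roar:
            best_line, best_roar = line, applause(line)
    return best_line

print(round(stump_speech(), 2))  # ~7.0 -- whatever this crowd rewards most
```

The trouble, as with any model trained greedily on audience feedback, is that it optimizes for applause, not truth.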

Algorithmic risk-assessment: hiding racism behind "empirical" black boxes

Courts in America and around the world increasingly rely on risk-assessment software in determining bail and sentencing; the systems require the accused to answer more than a hundred questions, which are fed into a secret model that spits out a single-digit "risk score" that courts use to decide who to lock up, and for how long. Read the rest
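Structurally these tools are simple: questionnaire answers become a feature vector, a proprietary model maps the vector to a probability, and the probability is binned into a single digit. A bare-bones sketch of that pipeline (the questions, weights, and bins are all invented; real vendors keep theirs secret, which is the point):

```python
import math

# Invented stand-ins for a proprietary questionnaire and its weights.
WEIGHTS = {"prior_arrests": 0.30, "age_under_25": 0.80,
           "parent_ever_jailed": 0.60, "unstable_housing": 0.40}
BIAS = -2.0

def risk_score(answers: dict) -> int:
    """Map yes/no questionnaire answers (1 or 0) to a 1-10 'risk score'."""
    z = BIAS + sum(w * answers.get(q, 0) for q, w in WEIGHTS.items())
    p = 1 / (1 + math.exp(-z))                  # logistic model -> probability
    return min(10, max(1, math.ceil(p * 10)))   # binned to a single digit

print(risk_score({"prior_arrests": 1, "age_under_25": 1,
                  "parent_ever_jailed": 1, "unstable_housing": 1}))  # 6
```

Note how questions like parent_ever_jailed and unstable_housing can act as proxies for race and poverty while the output reads as neutral arithmetic.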