Ethics and AI: all models are wrong, some are useful, and some of those are good

The old stats adage goes: "All models are wrong, but some models are useful." In this 35-minute presentation from the O'Reilly Open Data Science Conference, data ethicist Abe Gong from Aspire Health provides a nuanced, meaningful, accessible and eminently actionable overview of the ways that ethical considerations can be incorporated into the design of powerful algorithms. Read the rest

Uber uses data-mining to identify and block riders who may be cops, investigators or regulators

Greyball is Uber's codename for a program that tries to predict which new signups are secretly cops, regulators or investigators who could make trouble for the company; it was deployed in "Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea," where the company was fighting with the authorities. Read the rest

Google's troll-fighting AI can be defeated by typos

Jigsaw is a "wildly ambitious" Google spin-off research unit that recently released Perspective, a machine-learning system designed to identify argumentative, belittling and meanspirited online conversation. Within days of its release, independent researchers published a paper demonstrating a way of tricking Perspective into trusting ugly messages, just by introducing human-readable misspellings into their prose. Read the rest
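A toy sketch of why this class of attack works (this is my own illustration, not Perspective's actual model, which is a neural scorer): any classifier whose signal is keyed to exact surface forms of words will miss a misspelled variant that a human reads effortlessly. The word list and scoring function here are hypothetical.

```python
# Hypothetical exact-match toxicity scorer -- NOT how Perspective works,
# just a minimal demonstration of surface-form brittleness.
TOXIC_WORDS = {"idiot", "stupid", "moron"}

def toxicity_score(text: str) -> float:
    """Fraction of words that exactly match the toxic word list."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_WORDS)
    return hits / len(words)

print(toxicity_score("you are an idiot"))   # flagged: 0.25
print(toxicity_score("you are an idiiot"))  # same meaning to a human: 0.0
```

The misspelling "idiiot" costs a human reader nothing but drops the score to zero, which is the shape of the attack the researchers demonstrated.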

UK Parliament to hold inquiry into algorithmic transparency

The House of Commons Science and Technology Committee put out a public call for suggestions on subjects it should investigate and one of the three winning pitches came from Stephanie Mathisen, campaigns and policy officer at Sense about Science, who suggested an inquiry into transparency in algorithmic decision-making. Read the rest

Wearing an activity tracker gives insurance companies the data they need to discriminate against people like you

Many insurers offer breaks to people who wear activity trackers that gather data on them; as Cathy "Mathbabe" O'Neil points out, the allegedly "anonymized" data-collection is trivial to re-identify (so this data might be used against you), and, more broadly, the real business model for this data isn't improving your health outcomes -- it's dividing the world into high-risk and low-risk people, so insurers can charge people more. Read the rest
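A minimal sketch of the kind of linkage attack O'Neil is warning about, with entirely made-up data: a dataset that drops names but keeps quasi-identifiers (zip code, birth year, sex) can be joined against any public roster that carries the same fields, recovering identities whenever the combination is unique.

```python
# Hypothetical "anonymized" fitness records: names removed, but
# quasi-identifiers retained.
anonymized_steps = [
    {"zip": "98101", "birth_year": 1985, "sex": "F", "daily_steps": 2100},
    {"zip": "98052", "birth_year": 1971, "sex": "M", "daily_steps": 11400},
]

# A public roster (voter file, staff directory, etc.) with the same fields.
public_roster = [
    {"name": "A. Jones", "zip": "98101", "birth_year": 1985, "sex": "F"},
    {"name": "B. Smith", "zip": "98052", "birth_year": 1971, "sex": "M"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(records, roster):
    """Join on quasi-identifiers; a unique match re-identifies the record."""
    out = []
    for rec in records:
        matches = [p for p in roster
                   if all(p[k] == rec[k] for k in QUASI_IDS)]
        if len(matches) == 1:
            out.append((matches[0]["name"], rec["daily_steps"]))
    return out

print(reidentify(anonymized_steps, public_roster))
# [('A. Jones', 2100), ('B. Smith', 11400)]
```

This is the well-known result (going back to Latanya Sweeney's work) that a handful of innocuous attributes uniquely identifies most people, which is why "anonymized" is doing so much work in the insurers' pitch.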

How to find out what Trump's favorite big data machine knows about you

Cambridge Analytica is a dirty, Dementor-focused big data research outfit that provided the analysis and psych profiles that the Trump campaign used in its electioneering; because its parent company is in the UK, it is required (under EU law) to send you the dossier it holds on you for £10. Read the rest

Skynet: Trump campaigned like a crappy AI, and now he's governing like one, too

Cathy "Weapons of Math Destruction" O'Neil nailed Trump's tactics when she compared him to a busted machine-learning algorithm, throwing a ton of random ideas out, listening for his base to cheer the ones that excited their worst instincts, and then doubling down on them. Read the rest

Trump's big data "secret sauce" sorcery -- a much-needed reality check

An article that went viral last week attributed Trump's Electoral College victory to the dark big data sorcery of Cambridge Analytica, a dirty, Dementor-focused big data company that specializes in political campaigns. Read the rest

Automated book-culling software drives librarians to create fake patrons to "check out" endangered titles

Two employees at the East Lake County Library created a fictional patron called Chuck Finley -- entering fake driver's license and address details into the library system -- and then used the account to check out 2,361 books over nine months in 2016, in order to trick the system into believing that the books they loved were being circulated to the library's patrons, thus rescuing the books from automated purges of low-popularity titles. Read the rest

Indiana's "educational achievement" bonuses: teachers in rich schools get 20x more than those in poor districts

Indiana is one of many GOP-led states that assume that the poor performance of schools in poor neighborhoods is the fault of bad teaching -- and not, say, systemic poverty, the absence of funds raised by rich parents, hunger, mass incarceration -- and so teachers are offered bonuses for "improving" their students' outcomes, which generally means their standardized test scores (since presumptively bad teachers can't be trusted to evaluate their students' qualitative improvements). Read the rest

Datafication: the ideology that says that data isn't ideological

Cennydd Bowles's essay on "Datafication and ideological blindness" attacks the business/product-design vogue for basing every decision on "data," a practice premised on the idea that data reflects objective truth, free from bias. Read the rest

Michigan Governor Rick Snyder's MIDAS program had a 93% error rate and falsely accused 20,000 workers of unemployment fraud

Under the cruel austerity of Michigan governor Rick Snyder -- whose policies led to the mass-poisonings of children in Flint -- any claims for unemployment insurance were vigorously investigated, with the state operating on the assumption that any worker who claimed a benefit was probably committing fraud. Read the rest

A checklist for figuring out whether your algorithm is a "weapon of math destruction"

The Data & Society institute (dedicated to critical, interdisciplinary perspectives on big data) held an online seminar devoted to Cathy O'Neil's groundbreaking book Weapons of Math Destruction, which showed how badly designed algorithmic decision-making systems can create, magnify and entrench the social problems they're supposed to solve, perpetuating inequality, destabilizing the economy, and making a small number of people very, very rich. Read the rest

Washington data scientists: a chance to figure out how to use your skills for good

A pair of data scientists in Washington State are convening a meeting of "Data Scientists, Data Science Enthusiasts, and Advocates for Civic Liberties and Social Justice" who want to figure out how "Data Scientists & IT Professionals [can] use their expertise to help answer the current human questions which social and policy-based organizations are currently struggling to address." Read the rest

Superstar academic economists charge $1000+/hr to defend disastrous corporate megamergers

In 1977 Richard Posner (then a prof at the University of Chicago's notorious ultra-libertarian school; now a federal judge) teamed up with an economist and law student to form Lexecon, which has since grown to a firm worth more than $130,000,000, whose major business is to serve as intellectual guns-for-hire who will produce plausible-seeming economic models defending giant corporate mergers against anti-trust regulators. Read the rest

Though crime happens everywhere, predictive policing tools send cops to poor/black neighborhoods

Researchers from the Human Rights Data Analysis Group (previously) reimplemented the algorithm behind the PredPol predictive policing system that police departments around America have spent a fortune on in order to find out where to set their patrols, fed it Oakland's 2010 arrest data, then asked it to predict where the crime would be in 2011. Read the rest
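The dynamic HRDAG documented can be sketched in a few lines (this is an assumption-laden toy, not their reimplementation): when a model is trained on arrest records rather than on crime itself, it sends patrols where arrests were previously recorded, and the extra patrols then record more arrests there, closing a feedback loop that reproduces past enforcement patterns regardless of where crime actually happens.

```python
# Toy feedback-loop demonstration with invented numbers.
# Underlying crime is uniform across two neighborhoods...
true_crime = {"neighborhood_A": 100, "neighborhood_B": 100}
# ...but historical arrests reflect where police already patrolled.
arrests = {"neighborhood_A": 80, "neighborhood_B": 20}

def predict_hotspot(arrest_counts):
    """'Predict' next year's crime from last year's arrest data."""
    return max(arrest_counts, key=arrest_counts.get)

for year in range(3):
    target = predict_hotspot(arrests)
    # Extra patrols in the targeted neighborhood record more arrests
    # there, even though the underlying crime rate is uniform.
    arrests[target] += 30
    print(year, target, arrests)
# The model keeps sending patrols to neighborhood_A, year after year.
```

The point of the exercise: the predictions converge on where policing happened, not where crime happened, which is exactly the failure mode the Oakland experiment surfaced.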

The AI Now Report: social/economic implications of near-future AI

The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation -- along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight; to increasing the diversity of AI researchers; to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems. Read the rest

More posts