"Intellectual Debt": It's bad enough when AI gets its predictions wrong, but it's potentially WORSE when AI gets it right

Jonathan Zittrain (previously) is consistently a source of insights that arrive years ahead of their wider acceptance in tech, law, ethics and culture (2008's The Future of the Internet (and how to stop it) is surprisingly relevant 11 years later); in a new long essay on Medium (with a shorter version in the New Yorker), Zittrain examines the perils of the "intellectual debt" we incur when we rely on machine learning systems to make predictions whose rationale we don't understand: without an underlying theory of those predictions, we can't know their limitations. Read the rest

AI is like a magic trick: amazing until it goes wrong, then revealed as a cheap and brittle effect

I used to be on the program committee for the O'Reilly Emerging Technology conferences; one year we decided to make the theme "magic" -- all the ways that new technologies were doing things that baffled us and blew us away. Read the rest

Rage Inside the Machine: an insightful, brilliant critique of AI's computer science, sociology, philosophy and economics

Rob Smith is an eminent computer scientist and machine learning pioneer whose work on genetic algorithms has been influential in both industry and the academy; now, in his first book for a general audience, Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, Smith expertly draws connections between AI, neoliberalism, human bias, eugenics and far-right populism, showing how the biases of computer science and its corporate paymasters have distorted our whole society. Read the rest

Machine learning classifiers are up to 20% less accurate when labeling photos from homes in poor countries

A new study from Facebook AI Research evaluates how well common machine-learning classifiers label photos of household objects from rich countries versus poor ones, and finds that the models' performance drops significantly when they are asked to classify the possessions of poor people. Read the rest
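
The study's core measurement is easy to reproduce in spirit: score the same model's predictions separately for each income stratum and compare. Here's a minimal sketch with synthetic stand-in data (the paper evaluates real household photos; every array below is a made-up placeholder):

```python
# Sketch: measuring a classifier's accuracy gap across income groups.
# All data here is synthetic; in the real study, predictions come from
# image classifiers run on photos of household objects.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
labels = rng.integers(0, 10, size=n)                 # ground-truth object classes
income_group = rng.choice(["low", "high"], size=n)   # household income bucket

# Simulate a model that is systematically worse on low-income households.
correct_prob = np.where(income_group == "high", 0.85, 0.65)
predictions = np.where(rng.random(n) < correct_prob,
                       labels,
                       rng.integers(0, 10, size=n))  # wrong guesses are random

for group in ("low", "high"):
    mask = income_group == group
    acc = (predictions[mask] == labels[mask]).mean()
    print(f"{group}-income accuracy: {acc:.1%}")
```

The disaggregation is the whole trick: a single aggregate accuracy number would hide the gap entirely.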

Mozilla's Internet Health Report: discriminatory AI, surveilling smart cities, ad-tech

Every year, the Mozilla Foundation releases a massive "Internet Health Report" summarizing the ways in which the internet is being used to both support and subvert human thriving; though these reports cover a wide range of topics, every year the foundation chooses a small number of themes to focus on. This year's themes are Let's Ask More of AI, The Power of Cities, and Rethinking Digital Ads. Read the rest

The Safe Face Pledge: an ethical code of conduct for facial recognition scientists and developers

The Safe Face Pledge launched last month as a "pledge to mitigate abuse of facial analysis technology," with four themes: "Show Value for Human Life, Dignity, and Rights"; "Address Harmful Bias"; "Facilitate Transparency"; and "Embed Commitments into Business Practices" (SAFE). Read the rest

Good news: California ends cash bail! Bad news: It's been replaced with a black-box algorithm.

Cash bail has turned American jails into debtors' prisons, where the wealthy can go about their business while awaiting trial, and the poor can languish for months or even years in jail because they can't make bail (in practice, they generally plead guilty, regardless of their innocence). Read the rest

Voice assistants suck, but they suck worse if you have an "accent"

Research into the shittiness of voice assistants zeroed in on a problem that many people were all too aware of: the inability of these devices to recognize "accented" speech ("accented" in quotes because there is no one formally correct English, and the most widely spoken English variants, such as Indian English, fall into this "accented" category). Read the rest

Facebook forced to drop "feature" that let advertisers block black people, old people and women

It's illegal to discriminate based on "protected classes," including "men and women on the basis of sex; any group which shares a common race, religion, color, or national origin; people over 40; and people with physical or mental handicaps" -- but from the earliest days of its self-serve ad platform, Facebook gave advertisers the ability to exclude people from seeing ads for jobs, financial products, housing and other necessities based on these categories. Read the rest

Thanks to 2016's trade secret law and algorithmic "justice," America's courts have become AI-Kafka nightmares

In 2014, the Alice decision made it much harder to patent software in the USA; in 2016, Congress passed the Defend Trade Secrets Act, creating the first federal trade secrets statute. The result of these two developments is that software companies have aggressively switched from patents to trade secrets as a means of controlling competition and limiting inspection and criticism of their products. Read the rest

Garbage In, Garbage Out: machine learning has not repealed the iron law of computer science

Pete Warden writes convincingly about computer scientists' focus on improving machine learning algorithms to the exclusion of improving the training data those algorithms learn from, and how that focus has slowed the progress of machine learning. Read the rest
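
Warden's point is straightforward to demonstrate: hold the algorithm constant, degrade only the labels, and watch accuracy fall. A minimal sketch using scikit-learn's toy digits dataset (the noise rates and model choice are illustrative assumptions, not Warden's experiment):

```python
# Sketch: label noise in the training data caps accuracy no matter how
# well-tuned the algorithm is. The test labels stay clean, so the score
# reflects what the noisy-trained model actually learned.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise_rate in (0.0, 0.2, 0.4):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = rng.integers(0, 10, size=flip.sum())  # corrupt some labels
    model = LogisticRegression(max_iter=2000).fit(X_train, y_noisy)
    print(f"label noise {noise_rate:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.1%}")
```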

How machine learning engineers can detect and debug algorithmic bias

Ben Lorica, O'Reilly's chief data scientist, has posted slides and notes from his talk at last December's Strata Data Conference in Singapore, "We need to build machine learning tools to augment machine learning engineers." Read the rest
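
In the spirit of the tooling Lorica calls for (this is a generic sketch, not code from his talk), the first step in a bias audit is disaggregated error analysis: compute the same metrics for each protected group and look for gaps:

```python
# Sketch: comparing a binary classifier's behavior across a protected
# attribute. All arrays are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["A", "B"], size=n)   # protected attribute
y_true = rng.integers(0, 2, size=n)      # ground truth

# Simulate a model that misses positives more often for group B.
miss_prob = np.where(group == "B", 0.30, 0.10)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_prob), 0, y_true)

for g in ("A", "B"):
    m = group == g
    pos = y_true[m] == 1
    tpr = (y_pred[m][pos] == 1).mean()         # equal-opportunity check
    selection_rate = (y_pred[m] == 1).mean()   # demographic-parity check
    print(f"group {g}: TPR={tpr:.2f}, selection rate={selection_rate:.2f}")
```

A large TPR gap between groups is exactly the kind of signal such tooling should surface automatically, rather than leaving engineers to spot it by hand.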