"ai now institute"

Teenagers interview A.I. experts about the future of thinking machines

The young journalists at YR Media (formerly Youth Radio) were curious about "what artificial intelligence means for race, art, and the apocalypse." So they asked the opinion of a few experts, including tech journalist Alexis Madrigal, engineer Deb Raji of New York University's AI Now Institute, artist/programmer Sam Lavigne, and AI ethicist Rachel Thomas. You can read (and listen to) bits from the lively conversation in the YR Media feature "In the Black Mirror." Here's an excerpt:

RACE + BIAS

Deb Raji: There was a study released where we evaluated the commercial facial recognition systems that were deployed. And we said, "How well does this system work for different intersectional demographics?" So, how well does it work for darker skinned women versus lighter skinned women versus darker skinned men and lighter skinned men? And it turned out that there was a 30 percent performance gap between lighter skinned men and darker skinned men, which is insane. For reference, usually you don't deploy a system that's performing at less than 95 percent accuracy.
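What Raji is describing is a disaggregated evaluation: instead of reporting one overall accuracy number, you score the system separately for each intersectional subgroup and look at the spread. Here's a minimal Python sketch of that bookkeeping; the subgroup labels echo the quote, but the numbers are invented placeholders (chosen only to mirror the roughly 30-point gap she mentions), not figures from the study.

```python
# Hypothetical per-subgroup audit results -- NOT the actual study's numbers.
results = {
    "lighter-skinned men":   {"correct": 99, "total": 100},
    "lighter-skinned women": {"correct": 94, "total": 100},
    "darker-skinned women":  {"correct": 79, "total": 100},
    "darker-skinned men":    {"correct": 69, "total": 100},
}

# Accuracy per intersectional subgroup, worst-served group first.
accuracy = {group: r["correct"] / r["total"] for group, r in results.items()}
for group, acc in sorted(accuracy.items(), key=lambda kv: kv[1]):
    print(f"{group:>22}: {acc:.0%}")

# The headline figure is the gap between the best- and worst-served subgroups.
gap = max(accuracy.values()) - min(accuracy.values())
print(f"gap between best- and worst-served subgroups: {gap:.0%}")
```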

Rachel Thomas: Another example of bias comes from some software that's used in many U.S. courtrooms. It gives people a rating of how likely they are to commit another crime. And it was found that this software has twice as high a false positive rate on black defendants compared to white defendants. So that means it was predicting that people were high risk even though they were not being rearrested. And so this is something that's really impacting people's lives because it was being used in sentencing decisions and bail decisions.
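The disparity Thomas describes is a gap in false positive rates: among defendants who were not rearrested, how often did the tool label them high risk, broken out by group? Below is a hedged sketch of that calculation using made-up records (chosen so the toy gap comes out roughly two-to-one, echoing the finding she cites), not the actual courtroom data.

```python
from collections import defaultdict

# (group, predicted_high_risk, rearrested) -- invented placeholder records.
defendants = [
    ("black", True,  False),
    ("black", True,  False),
    ("black", False, False),
    ("black", False, False),
    ("white", True,  False),
    ("white", False, False),
    ("white", False, False),
    ("white", False, False),
]

false_positives = defaultdict(int)  # labeled high risk but not rearrested
not_rearrested = defaultdict(int)   # all defendants who were not rearrested
for group, high_risk, rearrested in defendants:
    if not rearrested:
        not_rearrested[group] += 1
        false_positives[group] += int(high_risk)

# False positive rate per group: share of non-rearrested people flagged high risk.
for group in not_rearrested:
    fpr = false_positives[group] / not_rearrested[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
```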

Read the rest

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more

Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of where AI research stands and where the ethical gaps in its use lie, and makes a list of a dozen urgent recommendations for the industry, the research community, and regulators and governments. Read the rest

Google walkout organizers say they're being retaliated against for demanding ethical standards

Meredith Whittaker (previously) and Claire Stapleton were two of the principal organizers of the mass googler walkouts over the company's coverup and rewarding of sexual assault and harassment, as well as of other Google employee actions over the company's involvement in drone warfare and Chinese censorship. Now, in a widely circulated letter to colleagues, they say that they have been targeted for retaliation by Google management. Read the rest

Beyond GIGO: how "predictive policing" launders racism, corruption and bias to make them seem empirical

"Predictive policing" is the idea that you can feed crime stats to a machine-learning system and it will produce a model that can predict crime. It is garbage. Read the rest

The third annual AI Now report: 10 more ways to make AI safe for human flourishing

Every year, NYU's nonprofit, critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017); this year's report has just been published, written by a fantastic panel, including Meredith Whittaker (previously -- one of the leaders of the successful googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously -- one of the most incisive critics of AI); Jason Schultz (previously -- a former EFF attorney now at NYU) and many others. Read the rest

Ten recommendations to make AI safe for humanity

A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now they've released a followup, with ten clear recommendations for AI implementations in the public and private sectors. Read the rest
