The third annual AI Now report: 10 more ways to make AI safe for human flourishing

Every year, NYU's nonprofit critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017). This year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously — one of the leaders of the successful Googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously — one of the most incisive critics of AI); Jason Schultz (previously — a former EFF attorney now at NYU); and many others.

The AI Now Report: social/economic implications of near-future AI

The National Economic Council convened a symposium at NYU's Information Law Institute in July, and it has released its report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation — along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight, to increasing the diversity of AI researchers, to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems.

AI search chatbots output lies, nonsense and hallucinations

image: Mopic/Shutterstock

The AI chatbots being integrated into Bing, Quora and other search platforms, far from replacing traditional search results, offer a constant stream of hallucinations: bullshit coughed up as fact by mindless generators with no model of truth or the universe to check their output against — just an ocean of data, the probabilities within it, and whatever (secret) filters the humans serving them bolt on.

Many of the key Googler Uprising organizers have quit, citing retaliation from senior management

The Googler Uprising was a string of employee actions within Google over a series of issues related to ethics and business practices: starting with the company's AI project for US military drones; then its secretive work on a censored, surveillance-friendly search tool for use in China; then the $90m payout to Android founder Andy Rubin after he was accused of multiple sexual assaults.

Google walkout organizers say they're being retaliated against for demanding ethical standards

Meredith Whittaker (previously) and Claire Stapleton were two of the principal organizers of the mass Googler walkouts over the company's cover-up and rewarding of sexual assault and harassment, as well as other Google employee actions over the company's involvement in drone warfare and Chinese censorship; now, in a widely circulated letter to colleagues, they say they have been targeted for retaliation by Google management.

Ten recommendations to make AI safe for humanity

A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now it has released a follow-up, with ten clear recommendations for AI implementations in the public and private sectors.

White House contends with AI's big social challenges, July 7/NYC

Meredith from Simply Secure writes, "Artificial Intelligence is already with us, and the White House and New York University's Information Law Institute are hosting a major public symposium to examine what the social and economic impacts might be. AI Now, happening July 7th in New York City, will address the real-world impacts of AI systems in the next 5-10 years."