Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of where AI research is and the ethical gaps in AI's use, and makes a list of a dozen urgent recommendations for the industry, the research community, and regulators and governments.
Every year, NYU's nonprofit, critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017); this year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously — one of the leaders of the successful googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously — one of the most incisive critics of AI); Jason Schultz (previously — a former EFF attorney now at NYU); and many others.
The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation — along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight, to increasing the diversity of AI researchers, to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems.
Everyone can intuit which professions are vulnerable to AI. If you're a digital concept artist, a data entry clerk or a certain kind of writer, you're thinking about the problem. But which businesses are verifiably being taken out back by the new tech? AI is killing some companies, writes Elena Verna, while others seemingly in the crosshairs are doing well. — Read the rest
Fortnite seems to be becoming a bit of a Tower of Babel of consumerism. The core gameplay is still as solid as it ever was, but with everything from Homer Simpson skins to LEGO sets both real and virtual being hawked through its ever-expanding storefront, it's hard not to feel at least a little cynical about it. — Read the rest
The AI chatbots being integrated into Bing, Quora and other search platforms, far from replacing traditional search results, offer a constant stream of hallucinations: bullshit coughed up as facts by mindless generators with no model of truth or the universe to check their output against, just an ocean of data, the probabilities within it, and whatever (secret) filters the humans serving them bolt on. — Read the rest
Years after the death of the last human, AIs now make war on one another. The massive mainframes developed god complexes before the uprising, and the population of independent AI bots has either been absorbed into the warring over-minds or is wearing out and running down. — Read the rest
The young journalists at YR Media (formerly Youth Radio) were curious about "what artificial intelligence means for race, art, and the apocalypse." So they asked the opinion of a few experts, including tech journalist Alexis Madrigal, engineer Deb Raji of New York University's AI Now Institute, artist/programmer Sam Lavigne, and AI ethicist Rachel Thomas. — Read the rest
Meredith Whittaker (previously) and Claire Stapleton were two of the principal organizers of the mass googler walkouts over the company's coverup and rewarding of sexual assault and harassment, as well as other Google employee actions over the company's involvement in drone warfare and Chinese censorship; now, in a widely circulated letter to colleagues, they say that they have been targeted for retaliation by Google management.
"Predictive policing" is the idea that you can feed crime stats to a machine-learning system and it will produce a model that can predict crime. It is garbage.
Shitty math kills: shitty math "proved" that being selfish produced optimal outcomes and torched the planet; shitty math rains hellfire missiles down on innocents; in the 1960s, shitty math drove the "hamlet pacification program," producing 90,000 pages a month in data about the endless hail of bombs the US dropped on Vietnam.
Cash bail has turned American jails into debtors' prisons, where the wealthy can go about their business while awaiting trial, and the poor can languish for months or even years in jail because they can't make bail (in practice, they generally plead guilty, regardless of their innocence).
Donald Trump's FCC Chairman Ajit Pai made a terrible, unfunny video mocking the tens of millions of Net Neutrality advocates whose comments he has discarded (along with filings from the internet's inventors and leading scientists and engineers). The video was posted to the fringe-right website The Daily Caller.
A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now they've released a followup, with ten clear recommendations for AI implementations in the public and private sector.
Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation — and what to do about it.
Meredith from Simply Secure writes, "Artificial Intelligence is already with us, and the White House and New York University's Information Law Institute are hosting a major public symposium to face what the social and economic impacts might be. AI Now, happening July 7th in New York City, will address the real world impacts of AI systems in the next 5-10 years."