"AI now"

Many of the key Googler Uprising organizers have quit, citing retaliation from senior management

The Googler Uprising was a string of employee actions at Google over ethics and business practices: first the company's AI project for US military drones; then its secretive work on a censored, surveilling search tool for use in China; then the $90m payout to Android founder Andy Rubin after he was accused of multiple sexual assaults. Read the rest

Google walkout organizers say they're being retaliated against for demanding ethical standards

Meredith Whittaker (previously) and Claire Stapleton were two of the principal organizers of the mass Googler walkouts over the company's coverup and rewarding of sexual assault and harassment, as well as other employee actions over Google's involvement in drone warfare and Chinese censorship; now, in a widely circulated letter to colleagues, they say they have been targeted for retaliation by Google management. Read the rest

Beyond GIGO: how "predictive policing" launders racism, corruption and bias to make them seem empirical

"Predictive policing" is the idea that you can feed crime stats to a machine-learning system and it will produce a model that can predict crime. It is garbage. Read the rest
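The garbage-in/garbage-out problem can be made concrete with a toy simulation (all names and numbers here are hypothetical, invented for illustration): a "predictor" trained only on arrest counts will send more patrols wherever arrests were highest, which produces more arrests there, which makes the next prediction even more lopsided — even when underlying offense rates are identical.

```python
# Toy sketch of GIGO in "predictive policing". The model only ever sees
# arrest data, so a neighborhood that was over-policed historically gets
# more predicted "crime", which directs more patrols there, which yields
# more arrests: a feedback loop, not an empirical measurement.
# Neighborhood names and counts are hypothetical.

historical_arrests = {"Northside": 120, "Southside": 40}  # biased record
true_offense_rate = {"Northside": 50, "Southside": 50}    # actual crime is equal

def predict_hotspot(arrest_counts):
    """'Predict' future crime by ranking neighborhoods on past arrests."""
    return max(arrest_counts, key=arrest_counts.get)

def simulate(arrests, rounds=5, patrol_bonus=20):
    for _ in range(rounds):
        hotspot = predict_hotspot(arrests)
        # Extra patrols in the predicted hotspot record proportionally more
        # offenses there, despite equal underlying rates everywhere.
        for hood, rate in true_offense_rate.items():
            arrests[hood] += rate + (patrol_bonus if hood == hotspot else 0)
    return arrests

print(simulate(dict(historical_arrests)))
```

Run it and the initial 80-arrest gap between the two (equally criminal) neighborhoods grows every round: the system launders its biased inputs into an "empirical" output.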

The third annual AI Now report: 10 more ways to make AI safe for human flourishing

Every year, NYU's nonprofit, critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017); this year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously -- one of the leaders of the successful Googler Uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously -- one of the most incisive critics of AI); Jason Schultz (previously -- a former EFF attorney now at NYU) and many others. Read the rest

What would a "counterculture of AI" look like?

Shitty math kills: shitty math "proved" that being selfish produced optimal outcomes and torched the planet; shitty math rains hellfire missiles down on innocents; in the 1960s, shitty math drove the "hamlet pacification program," producing 90,000 pages of data a month about the endless hail of bombs the US dropped on Vietnam. Read the rest

Good news: California ends cash bail! Bad news: It's been replaced with a black-box algorithm.

Cash bail has turned American jails into debtors' prisons, where the wealthy can go about their business while awaiting trial, and the poor can languish for months or even years in jail because they can't make bail (in practice, they generally plead guilty, regardless of their innocence). Read the rest

Make tough decisions easier with a decision matrix

You decided to read this article on a whim, and I'm glad you did. Not all decisions require a decision matrix. I'm an author, so I use decision matrices to methodically decide what's going to happen in a story. After completing a decision matrix, I'm satisfied with my choice and I can concentrate on writing about the option I selected.

In preparing to describe how I use decision matrices to make decisions, I noticed that Boing Boing's primary reference to decision matrices is highlighting another website's flowchart. It's a great flowchart. It's just not a decision matrix.

Decision matrices start with a matrix, which in this context just means a rectangular array of elements. A blank spreadsheet is fine. If you abhor spreadsheets, use any other medium you can make rows and columns with, but keep in mind that the decision at the end involves some elementary math. This is one example of the customizations I'll suggest as I explain decision matrices, to make the technique work well for you.

Now that you have an empty matrix, define the problem you want to solve. If you're into the 5 Whys technique, or if you've never tried it before, this is a good place to use it. If you'd rather meditate or walk or shower to let your subconscious define the problem, that may work better for you. A decision matrix is a tool, so make sure you're applying it to the question that will get you the results you want.

A simple question that a decision matrix can answer is "What should I write about?" List every possible option in the first column. Read the rest

Ten recommendations to make AI safe for humanity

A year ago, the AI Now Institute released its inaugural report on the near-future social and economic consequences of AI, drawing on input from a diverse expert panel representing a spectrum of disciplines; now they've released a followup, with ten clear recommendations for AI implementations in the public and private sector. Read the rest

AI's blind spot: garbage in, garbage out

Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation, and on what to do about it. Read the rest

The AI Now Report: social/economic implications of near-future AI

The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation -- along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight, to increasing the diversity of AI researchers, to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems. Read the rest

White House contends with AI's big social challenges, July 7/NYC

Meredith from Simply Secure writes, "Artificial Intelligence is already with us, and the White House and New York University’s Information Law Institute are hosting a major public symposium to face what the social and economic impacts might be. AI Now, happening July 7th in New York City, will address the real world impacts of AI systems in the next 5-10 years." Read the rest
