"kate crawford"

The third annual AI Now report: 10 more ways to make AI safe for human flourishing

Every year, AI Now -- NYU's nonprofit, critical activist group -- releases a report on the state of AI, with ten recommendations for making machine-learning systems equitable, transparent and fail-safe (2016, 2017). This year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously -- one of the leaders of the successful Googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously -- one of the most incisive critics of AI); Jason Schultz (previously -- a former EFF attorney, now at NYU); and many others. Read the rest

AI's blind spot: garbage in, garbage out

Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation -- and on what to do about it. Read the rest

We need a new science of measurement to figure out when AI is working and why it fails

Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems." Read the rest

Forget Skynet: AI is already making things terrible for people who aren't rich white dudes

Kate Crawford (previously) takes to the New York Times's editorial page to ask why rich white guys act like the big risk of machine-learning systems is that they'll evolve into Skynet-like apex predators that subjugate the human race, when there are already rampant problems with machine learning: algorithmic racist sentencing, algorithmic racist and sexist discrimination, algorithmic harassment, algorithmic hiring bias, algorithmic terrorist watchlisting, algorithmic racist policing, and a host of other algorithmic cruelties and nonsense, each one imbued with unassailable objectivity thanks to its mathematical underpinnings. Read the rest

White House contends with AI's big social challenges, July 7/NYC

Meredith from Simply Secure writes, "Artificial Intelligence is already with us, and the White House and New York University’s Information Law Institute are hosting a major public symposium to face what the social and economic impacts might be. AI Now, happening July 7th in New York City, will address the real-world impacts of AI systems over the next 5-10 years." Read the rest

Laura Poitras's Astro Noise: indispensable book and gallery show about mass surveillance

Laura Poitras is the MacArthur-winning, Oscar-winning documentarian who made Citizenfour. Her life has been dogged by government surveillance and harassment, and she has had to become a paranoid OPSEC ninja just to survive. Read the rest

Exclusive: Snowden intelligence docs reveal UK spooks' malware checklist

Boing Boing is proud to publish two original documents disclosed by Edward Snowden, in connection with "Sherlock Holmes and the Adventure of the Extraordinary Rendition," a short story written for Laura Poitras's Astro Noise exhibition, which runs at NYC's Whitney Museum of American Art from Feb 5 to May 1, 2016.

Do Not Track: revolutionary mashup documentary about Web privacy

Brett "Remix Manifesto" Gaylor tells the story of his new project: a revolutionary "mashup documentary" about privacy and the Web.

Inherent biases warp Big Data

The theory of Big Data is that the numbers have an objective property that makes their revealed truth especially valuable; but as Kate Crawford points out, Big Data has inherent, lurking bias, because the datasets are the creation of fallible, biased humans. For example, the data-points on how people reacted to Hurricane Sandy mostly emanated from Manhattan, because that's where the highest concentration of people wealthy enough to own tweeting, data-emanating smartphones is. But more severely affected locations -- Breezy Point, Coney Island and Rockaway -- produced almost no data, because they had fewer smartphones per capita, and the ones they had stopped working when their power and cellular networks failed first.
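
To make that sampling failure concrete, here is a minimal, hypothetical Python sketch -- every figure below is invented purely for illustration. The point is that the volume of data a neighborhood emits tracks smartphone ownership and network uptime, not how hard the storm actually hit it:

    # Hypothetical sketch of the sampling bias Crawford describes.
    # All numbers are invented for illustration only.
    neighborhoods = {
        # name: (residents, smartphone rate, network uptime, storm damage 0-10)
        "Manhattan":    (1_600_000, 0.60, 0.95, 2),
        "Breezy Point": (4_000,     0.30, 0.10, 10),
        "Coney Island": (50_000,    0.35, 0.15, 9),
        "Rockaway":     (100_000,   0.35, 0.10, 9),
    }

    for name, (pop, phones, uptime, damage) in neighborhoods.items():
        # Expected data points scale with who can tweet and whose network
        # stayed up -- not with how badly the area was actually hit.
        signal = pop * phones * uptime
        print(f"{name:13} damage={damage:2d}  expected data points ~ {signal:,.0f}")

Run it and Manhattan dwarfs the hardest-hit neighborhoods by orders of magnitude: a model trained on the raw feed would conclude the storm barely touched the Rockaways.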

I wrote about this in 2012, when Google switched strategies for describing the way it arrived at its search-ranking. Prior to that, the company had described its ranking process as a mathematical one and told people who didn't like how they got ranked that the problem was their own, because the numbers didn't lie. After governments took this argument to heart and started ordering Google to change its search results -- on the grounds that there's no free speech question if you're just ordering post-processing on the outcome of an equation -- Google started commissioning law review articles explaining that the algorithms that determined search-rank were the outcome of an expressive, human, editorial process that deserved free speech protection. Read the rest

Internal audit shows NSA often breaks privacy rules, with thousands of violations a year

The Washington Post today published several big scoops related to the National Security Agency's surveillance programs. The paper's investigations were triggered by documents leaked to it "earlier this summer" by former NSA contractor Edward Snowden, who has sought political asylum from a number of nations and is currently in Moscow. The U.S. wants to charge him with espionage for his revelations.

Barton Gellman writes about an internal NSA audit document which shows that since Congress granted the agency broad new powers in 2008, it has broken privacy rules thousands of times per year -- and sometimes because of typos. Read the rest