Beyond GIGO: how "predictive policing" launders racism, corruption and bias to make them seem empirical

"Predictive policing" is the idea that you can feed crime stats to a machine-learning system and it will produce a model that can predict crime. It is garbage.
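
The core failure mode is a feedback loop, which a toy simulation can make concrete. In this sketch (invented here for illustration, not any vendor's actual model), two neighborhoods have identical true crime rates, but one starts with more recorded arrests; because patrols follow the arrest data and arrests are only recorded where patrols go, the skewed history reproduces itself forever:

```python
import random

def simulate(true_rates, biased_seed, rounds=1000, seed=0):
    """Toy feedback loop: patrols go where past arrests were recorded,
    and new arrests can only be recorded where patrols go."""
    rng = random.Random(seed)
    arrests = list(biased_seed)  # historical (possibly biased) arrest counts
    for _ in range(rounds):
        # "predictive" step: patrol the neighborhood with the most arrests
        patrol = arrests.index(max(arrests))
        # an arrest can only happen where the police are looking
        if rng.random() < true_rates[patrol]:
            arrests[patrol] += 1
    return arrests

# two neighborhoods with IDENTICAL true crime rates,
# but neighborhood 0 starts with more recorded arrests
history = simulate(true_rates=[0.5, 0.5], biased_seed=[10, 1])
print(history)  # virtually all new arrests land in neighborhood 0
```

The model's output looks empirical, but it is just the seed bias echoed back: neighborhood 1 never accumulates arrests because it is never patrolled.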

Chasing down that list of potential Predpol customers reveals dozens of cities that have secretly experimented with "predictive policing"

Last October, I published a list of cities that appeared to have contracted with Predpol, a "predictive policing" company, based on research provided to me by an anonymous source who used clever methods to uncover the customer list.

American prisoners coerced or tricked into providing voice-prints for use in eternal, secret, unchecked surveillance

American prisoners are being forced -- on pain of losing access to the prison phone system -- to provide training data for a voice-print recognition algorithm that private contractors are building for a biometric surveillance system that listens in on prisoners' calls.

The Safe Face Pledge: an ethical code of conduct for facial recognition scientists and developers

The Safe Face Pledge launched last month as a "pledge to mitigate abuse of facial analysis technology," with four themes: "Show Value for Human Life, Dignity, and Rights"; "Address Harmful Bias"; "Facilitate Transparency"; and "Embed Commitments into Business Practices" (SAFE).

Building a high-performance cluster of Gameboy emulators to teach computers to play video games

Kamil Rocki was inspired by the 2016 paper from Google DeepMind researchers explaining how they used machine learning to develop a system that could play Breakout on the Atari 2600 with superhuman proficiency.

The third annual AI Now report: 10 more ways to make AI safe for human flourishing

Every year, NYU's nonprofit, critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017); this year's report has just been published, written by a fantastic panel including Meredith Whittaker (previously -- one of the leaders of the successful Googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously -- one of the most incisive critics of AI); Jason Schultz (previously -- a former EFF attorney, now at NYU); and many others.

Breed weird critters with machine learning and Ganbreeder

Ganbreeder uses a machine learning technique called Generative Adversarial Networks (GANs) to generate images that seem like photos, at least at first glance.
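
For a sense of the adversarial setup behind GANs, here's a minimal one-dimensional numpy sketch (an illustration of the training dynamic, not Ganbreeder's actual architecture): a linear generator learns to mimic samples from N(4, 1) by trying to fool a logistic discriminator, while the discriminator learns to tell real samples from fakes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target distribution the generator must learn to imitate: N(4, 1)
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

gen = {"a": 1.0, "b": 0.0}   # generator: x = a*z + b, starts far from target
disc = {"w": 0.0, "c": 0.0}  # discriminator: D(x) = sigmoid(w*x + c)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = gen["a"] * z + gen["b"]
    real = real_samples(batch)

    # discriminator step: push D(real) -> 1 and D(fake) -> 0
    dr = sigmoid(disc["w"] * real + disc["c"])
    df = sigmoid(disc["w"] * fake + disc["c"])
    disc["w"] += lr * np.mean((1 - dr) * real - df * fake)
    disc["c"] += lr * np.mean((1 - dr) - df)

    # generator step: push D(fake) -> 1 (fool the discriminator)
    df = sigmoid(disc["w"] * fake + disc["c"])
    gen["a"] += lr * np.mean((1 - df) * disc["w"] * z)
    gen["b"] += lr * np.mean((1 - df) * disc["w"])

# the generator's output mean should have drifted toward the target's
print(np.mean(gen["a"] * rng.normal(0, 1, 10000) + gen["b"]))
```

Ganbreeder's networks are vastly larger and operate on images, but the tug-of-war is the same: each player's gradient step is computed against the other's current best effort.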

Babysitter vetting and voice-analysis: Have we reached peak AI snakeoil?

The ever-useful Gartner Hype Cycle identifies an inflection point in the life of any new technology: the "Peak of Inflated Expectations," attained just before the sharp dropoff into the "Trough of Disillusionment." I've lived through the hype-cycles of several kinds of technology, and one iron-clad correlate of the "Peak of Inflated Expectations" is the "Peak of Huckster Snakeoil Salesmen": the moment at which con-artists just add a tech buzzword to some crooked scam and head out into the market to net a fortune before everyone gets wise to the idea that the shiny new hypefodder isn't a magic bullet.

Chinese AI traffic cam mistook a bus ad for a human and publicly shamed the CEO it depicted for jaywalking

China's war on jaywalking went to the next level last spring when AI-based facial recognition systems were integrated into some crosswalks, to punish jaywalkers by squirting them with water, sending them texts warning them about legal consequences of jaywalking, and/or publicly shaming them by displaying their pictures and names on large digital billboards.

Machine learning hucksters invent an AI racist uncle that predicts your personality from your facial bone structure

Phrenology (the fake science of predicting personality from the shape of your cranial bones) is like Freddy Krueger, an unkillable demon who rises from the grave every time some desperate huckster decides they need to make a few extra bucks.

What's missing from machine learning research: an East African perspective

Caltech computer scientist Milan Cvitkovic conducted 46 in-depth interviews with "scientists, engineers, and CEOs" and collated their machine learning research needs into an aptly titled paper, "Some Requests for Machine Learning Research from the East African Tech Scene," which presents an illuminating look into the gaps in the current practice of machine learning -- itself an example of how rich-world priorities shape our ability to understand, compute and predict the world.

Generative adversarial network produces a "universal fingerprint" that will unlock many smartphones

Researchers at NYU and U Michigan have published a paper explaining how they used a pair of machine-learning systems to develop a "universal fingerprint" that can fool the lowest-security fingerprint sensors 76% of the time (it is less effective against higher-security sensors).

Common sense: the Chomsky/Piaget debates come to AI

In 1975, Noam Chomsky and Jean Piaget held a historic debate about the nature of human cognition; Chomsky held that babies are born with a bunch of in-built rules and instincts that help them build up the knowledge that they need to navigate the world; Piaget argued that babies are effectively blank slates that acquire knowledge from experiencing the world (including the knowledge that there is a thing called "experience" and "the world").

A catalog of ingenious cheats developed by machine-learning systems

When you train a machine learning system, you give it a bunch of data -- a simulation, a dataset, etc -- and it uses statistical methods to find a way to solve some task: land a virtual airplane, recognize a face, match a block of text with a known author, etc.
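
The catalog's recurring punchline -- optimizers solve the task you specified, not the task you meant -- fits in a few lines of Python. In this invented example, a hill-climber is supposed to evolve a walking creature, but the buggy fitness function measures how far the creature's head ends up, so the optimizer simply grows a tall body that falls over:

```python
import random

def proxy_reward(genome):
    """Buggy simulator: fitness is how far the creature's head ends up.
    A tall creature that simply tips over scores its own height --
    no locomotion required."""
    height, stride = genome
    distance_walked = stride * 0.1   # walking is hard and slow
    distance_fallen = height         # falling over is free
    return max(distance_walked, distance_fallen)

def hill_climb(steps=500, seed=1):
    rng = random.Random(seed)
    genome = [1.0, 1.0]              # (body height, stride length)
    best = proxy_reward(genome)
    for _ in range(steps):
        cand = [max(0.0, g + rng.gauss(0, 0.5)) for g in genome]
        if proxy_reward(cand) >= best:
            genome, best = cand, proxy_reward(cand)
    return genome, best

genome, best = hill_climb()
print(genome, best)  # height balloons; stride barely matters
```

Nothing here is "cheating" from the optimizer's point of view: it faithfully maximized the number it was given, which is exactly what the systems in the catalog did too.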

Researchers claim to have permanently neutralized ad-blocking's most promising weapons

Last year, Princeton researchers revealed a powerful new ad-blocking technique: perceptual ad-blocking uses a machine-learning model, trained on images of pages with the ads labeled, to predict which page elements are ads to be blocked and which are not.
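
A cartoon version of the idea (with invented features and training data, not the Princeton team's actual model): treat each page element as a vector of visual and textual ad cues, and train a simple perceptron to label it as an ad or not:

```python
# Each element is described by three invented binary cues:
# (has "AdChoices" icon, banner-shaped, contains the word "Sponsored")
TRAIN = [
    ((1, 1, 0), 1),
    ((1, 0, 1), 1),
    ((0, 1, 1), 1),
    ((0, 0, 1), 1),
    ((0, 0, 0), 0),
    ((0, 1, 0), 0),  # banner-shaped but organic (e.g. a site header)
]

def train_perceptron(data, epochs=200, lr=1.0):
    """Classic perceptron rule: nudge the weights on every mistake."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train_perceptron(TRAIN)

def is_ad(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

print(is_ad((1, 1, 1)), is_ad((0, 0, 0)))
```

The real system classifies rendered pixels with a deep network rather than hand-picked cues, which is also why the "permanent neutralization" works: adversarial perturbations can flip such a classifier's output without visibly changing the page.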

Using machine learning to teach robots to get dressed

In the Siggraph 2018 paper Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning, a Georgia Institute of Technology/Google Brain research team describe how they taught body-shame to an AI, leaving it with an unstoppable compulsion to clothe itself before the frowning mien of God.

DeOldify: a free/open photo-retoucher based on machine learning

Jason Antic's DeOldify is a Self-Attention Generative Adversarial Network-based machine learning system that colorizes and restores old images. It's only in the early stages, but it's already producing really impressive results, and the pipeline includes a "defade" model that is "just training the same model to reconstruct images that [were] augmented with ridiculous contrast/brightness adjustments, as a simulation of fading photos and photos taken with old/bad equipment."
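
That "defade" trick -- manufacturing training pairs by artificially fading good images -- can be sketched in a few lines of numpy (an illustrative degradation function, not DeOldify's actual pipeline):

```python
import numpy as np

def degrade(img, rng):
    """Simulate a faded photo: crush contrast and shift brightness at random.
    Pairs of (degrade(img), img) could then train a 'defade' model to
    invert the damage."""
    contrast = rng.uniform(0.2, 0.6)     # severe contrast loss
    brightness = rng.uniform(-0.2, 0.3)  # random exposure drift
    mean = img.mean()
    out = (img - mean) * contrast + mean + brightness
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
original = rng.random((8, 8))            # stand-in grayscale image in [0, 1]
faded = degrade(original, rng)
print(original.std(), faded.std())       # contrast (std) drops after fading
```

Because the model only ever sees damage you can synthesize, the augmentation has to be "ridiculous" enough to cover whatever real faded photos throw at it.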
