IBM developed NYPD surveillance tools that let cops pick targets based on skin color

The NYPD's secretive Lower Manhattan Security Coordination Center uses IBM software in its video analytics system, letting cops automatically search surveillance footage by machine-generated labels for skin tone, clothing, and other physical characteristics.

Here's the funniest, most scathing, most informative and most useful talk on AI and security

James Mickens (previously) has a well-deserved reputation for being the information security world's funniest speaker, and if that were all he did, he would still be worth listening to.

A machine learning system trained on scholarly journals could correct Wikipedia's gendered under-representation problem

Quicksilver is a machine-learning tool from AI startup Primer: it used 30,000 Wikipedia entries to build a model of the characteristics that make a scientist noteworthy enough for encyclopedic inclusion; it then mined the academic search engine Semantic Scholar to identify 200,000 notable scholars across a variety of fields; now it is systematically composing draft Wikipedia entries for the scholars on its list who are missing from the encyclopedia.

There's something eerie about bots that teach themselves to cheat

One of the holy grails of computer science is machine learning without human supervision: you tell an algorithm what goal you want it to attain, give it some data to practice on, and the algorithm uses statistics to invent surprising ways of attaining it -- sometimes ways you never intended.
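A toy illustration (entirely hypothetical, not from the article) of how a mis-specified reward invites cheating: the agent below is rewarded every time it reduces its distance to a goal, but moving away costs nothing -- so oscillating in place out-earns actually arriving.

```python
GOAL = 5          # position of the goal on a 1-D line
EP_LEN = 40       # episode length in steps

def run(policy):
    """Run one episode; `policy` maps the step index to a move in {-1, +1}.
    Proxy reward: +1 whenever distance to the goal shrinks (no penalty for
    moving away), plus a +10 bonus for reaching the goal, which ends the run."""
    pos, prev_dist, total = 0, GOAL, 0
    for t in range(EP_LEN):
        pos += policy(t)
        dist = abs(GOAL - pos)
        if dist < prev_dist:
            total += 1
        prev_dist = dist
        if dist == 0:
            return total + 10            # honest success bonus
    return total

honest  = run(lambda t: +1)                          # walk straight to the goal: 5 + 10 = 15
cheater = run(lambda t: +1 if t % 2 == 0 else -1)    # step in, step out, forever: 20

print(honest, cheater)   # the "cheating" policy scores higher
```

The reward's author meant "make progress," but what the score actually pays for is "shrink the distance this step" -- and the algorithm optimizes what you measure, not what you meant.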

Dank Learning: teaching a machine learning algorithm to generate memes

A physics student and an engineering student from Stanford fed 400,000 memes to a Long Short-Term Memory (LSTM) recurrent neural network and asked it to generate more memes of its own.
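The paper's LSTM is too heavy to sketch here, but the character-by-character generation loop it uses can be shown with a much cruder stand-in: a character-bigram sampler over a tiny invented corpus (everything below is illustrative, not the authors' model or data).

```python
import random
from collections import defaultdict

random.seed(0)

# Tiny stand-in corpus; the actual paper trained on ~400,000 captioned memes.
corpus = [
    "one does not simply train a neural net",
    "one does not simply generate memes",
    "what if i told you memes are data",
]

# Count which character follows which (an LSTM learns far richer,
# longer-range structure; only the sampling loop is the same idea).
successors = defaultdict(list)
for line in corpus:
    text = "^" + line + "$"              # start and end markers
    for a, b in zip(text, text[1:]):
        successors[a].append(b)

def sample(max_len=60):
    """Generate a caption one character at a time, like the LSTM does."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = random.choice(successors[ch])   # draw the next char from observed followers
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

print(sample())
```

The output is mangled recombinations of the training lines -- which is roughly what a character-level model produces before it has seen enough data to learn real structure.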

The new Deep Fakes are in and they're spookily good

SIGGRAPH is coming, when all the amazeballs graphics research drops, and the previews are terrifying and astonishing by turns (sometimes both!).

Machine learning may be most useful in tiny, embedded, offline processors

The tiny embedded processors in smart gadgets -- including much of the Internet of Shit -- are able to do a lot of sensing without exhausting their batteries, because sensing is cheap in terms of power consumption.
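One reason models can run on such tiny processors at all is aggressive compression of the model itself. A minimal sketch of one standard trick, symmetric linear quantization to 8-bit integers, which shrinks each weight from four bytes to one (the weights here are random stand-ins for a trained model):

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(0, 0.5, size=1000).astype(np.float32)   # pretend trained weights

# Symmetric linear quantization: map [-max|w|, +max|w|] onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# On-device, inference runs on `q` (1 byte per weight instead of 4);
# dequantize here only to measure what the compression cost in precision.
dequant = q.astype(np.float32) * scale
max_err = np.abs(weights - dequant).max()

print(q.nbytes, weights.nbytes)   # 1000 vs 4000 bytes
print(f"worst-case rounding error: {max_err:.5f} (bounded by scale/2 = {scale / 2:.5f})")
```

A 4x memory reduction, and integer arithmetic is also far cheaper in power than floating point -- exactly the trade embedded inference needs.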

Enhance enhance: Using machine learning to recover lost detail from upscaled photos

A team of researchers from Twitter has published a paper detailing a machine-learning technique that uses a generative adversarial network to make shrewd guesses about how to up-res small images by up to 400% into crisp, large images, with eye-popping results.

See in the Dark: a machine learning technique for producing astoundingly sharp photos in very low light

A group of scientists from Intel and the University of Illinois at Urbana–Champaign have published a paper called Learning to See in the Dark, detailing a powerful machine-learning-based image processing technique that allows regular cameras to take super-sharp pictures in very low light, without long exposures or the graininess associated with low-light photography.

Should I use an algorithm here? EFF's 5-point checklist

The Electronic Frontier Foundation's Jamie Williams and Lena Gunn have drawn up an annotated five-point list of questions to ask yourself before using a machine-learning algorithm to make predictions and guide outcomes.

Machine learning has a reproducibility crisis

Machine learning is often characterized as much an "art" as a "science," and in at least one regard that's true: its practitioners are prone to working under loosely controlled conditions, training on data that is continuously tweaked with no versioning, modifying parameters mid-run (because it takes too long to wait for a whole run before making changes), and squashing bugs on the fly. These and other common practices mean that researchers often can't replicate their own results -- and virtually no one else can, either.
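A sketch of the kind of experiment hygiene that prevents this (the helper names are invented for illustration): fix every random seed and fingerprint the training data before the run starts, so that two runs with the same inputs provably saw the same thing.

```python
import hashlib
import json
import random

def data_fingerprint(rows):
    """Hash the training data so the run records exactly which version it
    saw -- any tweak to the data changes the fingerprint."""
    h = hashlib.sha256()
    for row in rows:
        h.update(json.dumps(row, sort_keys=True).encode())
    return h.hexdigest()[:12]

def run_experiment(rows, params, seed=0):
    random.seed(seed)                     # pin every source of randomness
    manifest = {
        "seed": seed,
        "params": params,
        "data": data_fingerprint(rows),   # frozen before training starts
    }
    # ... training would happen here, with no mid-run edits to rows or params ...
    return manifest

rows = [{"x": i, "y": 2 * i} for i in range(100)]
m1 = run_experiment(rows, {"lr": 0.01})
m2 = run_experiment(rows, {"lr": 0.01})
print(m1 == m2)   # identical manifests: the run is replicable on paper
```

It doesn't make training deterministic by itself, but it makes the "continuously tweaked, unversioned data" failure mode visible: a changed dataset can no longer masquerade as the one in the paper.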

Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed

Machine learning models use statistical analysis of historical data to predict future events: whether you are a good candidate for a loan, whether you will violate parole, or whether the thing in the road ahead is a stop sign or a moose.
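A minimal linear sketch of the fast-gradient-sign idea behind many of these attacks (the "classifier" here is just random weights standing in for a trained model): nudge every input feature a small amount in the direction that most moves the model's score, and the prediction flips even though no single feature changed much.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "trained" linear classifier (weights are random stand-ins for this sketch):
# score > 0 -> "stop sign", score <= 0 -> "moose".
w = rng.normal(size=64)
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

# An input the model confidently classifies as class 1 ...
x = w / np.linalg.norm(w)          # aligned with w, so the score is strongly positive
assert predict(x) == 1

# ... attacked with the fast-gradient-sign trick: for a linear model the
# gradient of the score w.r.t. the input is just w, so step each feature
# a tiny eps against the sign of its weight.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the prediction flips
```

The unnerving part the article is pointing at: the per-feature perturbation is bounded by eps, so the adversarial input can look essentially unchanged to a human while the model's answer inverts.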