Dank Learning: teaching a machine learning algorithm to generate memes

A physics student and an engineering student from Stanford fed 400,000 memes to a Long Short-Term Memory Recurrent Neural Network and asked it to generate more memes of its own.
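Roughly, the setup works like any text-generating language model: train an LSTM to predict the next character (or word) of a caption, then sample from it to get new captions. The sketch below is a minimal illustration in PyTorch, not the students' code; the two-caption `captions` list stands in for the 400,000 memes, and every name and hyperparameter here is a placeholder.

```python
# Minimal sketch of an LSTM caption generator (illustrative, not the paper's model).
import torch
import torch.nn as nn

captions = ["one does not simply generate memes", "such model very wow"]
chars = sorted(set("".join(captions)))
stoi = {c: i + 1 for i, c in enumerate(chars)}  # index 0 reserved for padding
itos = {i: c for c, i in stoi.items()}

class CaptionLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CaptionLSTM(vocab_size=len(stoi) + 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Next-character prediction: the model learns P(char_t | char_<t).
for epoch in range(200):
    for text in captions:
        ids = torch.tensor([[stoi[c] for c in text]])
        logits, _ = model(ids[:, :-1])
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

# Sample a new "caption" one character at a time from the trained model.
x = torch.tensor([[stoi[captions[0][0]]]])
state, out = None, [captions[0][0]]
for _ in range(40):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    nxt = torch.multinomial(probs, 1).item()
    if nxt == 0:  # padding index, never a real character
        break
    out.append(itos[nxt])
    x = torch.tensor([[nxt]])
print("".join(out))
```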

The new Deep Fakes are in and they're spookily good

SIGGRAPH is coming, when all the amazeballs graphics research drops, and the previews are terrifying and astonishing by turns (sometimes both!).

Machine learning may be most useful in tiny, embedded, offline processors

The tiny embedded processors in smart gadgets -- including much of the Internet of Shit -- are able to do a lot of sensing without exhausting their batteries, because sensing is cheap in terms of power consumption.

Enhance enhance: Using machine learning to recover lost detail from upscaled photos

A team of researchers from Twitter has published a paper detailing a machine learning technique that uses a generative adversarial network to make shrewd guesses about how to up-res small images by up to 400%, turning them into crisp, large images with eye-popping results.
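The general shape of a GAN-based super-resolution system is a generator that upscales (here, 4x -- the "400%") and a discriminator that penalizes output it can tell apart from real high-resolution images. The sketch below is a guess at that shape, not the paper's architecture: layer sizes, learning rates, and the random stand-in tensors are all placeholders.

```python
# Illustrative GAN super-resolution sketch (not the published model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            # Two PixelShuffle(2) stages give the 4x upscaling.
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(),
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

low_res = torch.rand(4, 3, 24, 24)    # stand-ins for real training patches
high_res = torch.rand(4, 3, 96, 96)

# Discriminator step: real patches -> 1, generated patches -> 0.
fake = gen(low_res).detach()
d_loss = bce(disc(high_res), torch.ones(4, 1)) + bce(disc(fake), torch.zeros(4, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: pixel-wise content loss plus an adversarial term that
# rewards the generator for fooling the discriminator.
sr = gen(low_res)
g_loss = F.mse_loss(sr, high_res) + 1e-3 * bce(disc(sr), torch.ones(4, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```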

See in the Dark: a machine learning technique for producing astoundingly sharp photos in very low light

A group of scientists from Intel and the University of Illinois at Urbana–Champaign have published a paper called Learning to See in the Dark detailing a powerful machine-learning-based image-processing technique that allows regular cameras to take super-sharp pictures in very low light, without long exposures or the graininess associated with low-light photography.
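The rough idea, as the paper describes it, is to amplify the raw sensor data and let a trained network handle the denoising, demosaicing, and color processing end to end. The toy sketch below illustrates that pipeline; it is not the authors' code (the real system uses a U-Net trained on paired short- and long-exposure raw images), and the shapes, layer sizes, and amplification ratio are illustrative.

```python
# Toy sketch of the "amplify raw data, then map it to RGB with a conv net" idea.
import torch
import torch.nn as nn

def pack_bayer(raw):
    """Pack an HxW Bayer mosaic into a 4-channel, half-resolution tensor."""
    return torch.stack([raw[0::2, 0::2], raw[0::2, 1::2],
                        raw[1::2, 0::2], raw[1::2, 1::2]])

class ToySeeInTheDark(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            # 12 channels -> PixelShuffle(2) restores full resolution as RGB.
            nn.Conv2d(32, 12, 3, padding=1), nn.PixelShuffle(2),
        )

    def forward(self, packed_raw, ratio):
        # Amplify the dark raw frame, then let the network clean it up.
        return self.body(packed_raw * ratio)

model = ToySeeInTheDark()
dark_raw = torch.rand(256, 256) * 0.01          # stand-in for a dark raw frame
x = pack_bayer(dark_raw).unsqueeze(0)           # shape (1, 4, 128, 128)
bright = model(x, ratio=100.0)                  # shape (1, 3, 256, 256)

# Training (not shown) would minimize L1 distance to a long-exposure reference:
# loss = (bright - reference_rgb).abs().mean()
print(bright.shape)
```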

Should I use an algorithm here? EFF's 5-point checklist

The Electronic Frontier Foundation's Jamie Williams and Lena Gunn have drawn up an annotated five-point list of questions to ask yourself before using a machine-learning algorithm to make predictions and guide outcomes. Read the rest

Machine learning has a reproducibility crisis

Machine learning is often characterized as being as much an "art" as a "science," and in at least one regard that's true: its practitioners are prone to working under loosely controlled conditions, using training data that is continuously tweaked with no versioning, modifying parameters during runs (because it takes too long to wait for a whole run to finish before making changes), and squashing bugs mid-run; these and other common practices mean that researchers often can't replicate their own results -- and virtually no one else can, either.
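For a concrete sense of what the missing bookkeeping would look like, here is a minimal sketch: pin every random seed, fingerprint the exact training data, and freeze the hyperparameters in a manifest before the run starts. None of this comes from any particular project; the values and file names are purely illustrative.

```python
# Illustrative reproducibility bookkeeping: seeds, data fingerprint, frozen config.
import hashlib
import json
import random

import numpy as np
import torch

def set_seeds(seed=0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Stand-in training data; hashing it makes "continuously tweaked" data detectable.
data = np.random.default_rng(0).normal(size=(1000, 10))

config = {
    "seed": 0,
    "learning_rate": 1e-3,
    "epochs": 10,
    "data_sha256": hashlib.sha256(data.tobytes()).hexdigest(),
}

set_seeds(config["seed"])
with open("run_manifest.json", "w") as f:
    json.dump(config, f, indent=2)
# ...train with exactly these settings, and never edit them mid-run...
```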

Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed

Machine learning models use statistical analysis of historical data to predict future events: whether you are a good candidate for a loan, whether you will violate parole, or whether the thing in the road ahead is a stop sign or a moose.
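One classic example of such an attack is the fast gradient sign method: nudge every input pixel a tiny amount in the direction that increases the model's loss, and the prediction can flip even though the change is nearly invisible. The sketch below demonstrates the mechanics against an untrained stand-in classifier, not any deployed model; the image, label, and epsilon are all placeholders.

```python
# Illustrative fast-gradient-sign-method (FGSM) adversarial perturbation.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
label = torch.tensor([3])                             # its "correct" class

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# Step each pixel slightly in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```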