Using Machine Learning to synthesize images that look NSFW but aren't


Yahoo has released a machine-learning model called open_nsfw that is designed to distinguish not-safe-for-work images from worksafe ones. By tweaking the model and combining it with Places-CNN, MIT's scene-recognition model, Gabriel Goh created a bunch of machine-generated scenes that score high on both models -- things that aren't porn, but look porny. Read the rest

AI's blind spot: garbage in, garbage out


Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation -- and what to do about it. Read the rest

Joi Ito interviews Barack Obama for Wired: machine learning, neurodiversity, basic research and Star Trek


Joi Ito (previously) -- director of MIT Media Lab, former Creative Commons chief, investor, entrepreneur, and happy mutant -- interviewed Barack Obama for a special, Obama-edited issue of Wired. Read the rest

Alt-Right trolls argue for hours with Twitter bot


The internet's army of enraged anime avatars has a new enemy beyond their comprehension: a Twitter bot created by writer and activist Sarah Nyberg to make fools of them. Some lose themselves to hours of interaction, unaware they are ranting at a computer program. Read the rest

Though crime happens everywhere, predictive policing tools send cops to poor/black neighborhoods


Researchers from the Human Rights Data Analysis Group (previously) reimplemented the algorithm behind PredPol, the predictive policing system that police departments around America have spent a fortune on in order to figure out where to send their patrols. They fed it Oakland's 2010 arrest data, then asked it to predict where the crime would be in 2011. Read the rest
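The core problem the HRDAG researchers point to is a feedback loop: arrest data reflects where police already patrol, not where crime actually happens, so a system trained on arrests keeps sending patrols back to the same neighborhoods. A minimal, purely illustrative sketch of that loop (the neighborhoods, numbers, and "send everyone to the top hotspot" rule are invented for illustration, not taken from PredPol or the study):

```python
def predict_hotspot(history):
    """Naive 'predictive policing': patrol wherever past recorded
    arrests are highest (ties broken by index)."""
    return max(range(len(history)), key=lambda i: history[i])

# Two neighborhoods with the SAME underlying crime rate; neighborhood 0
# merely starts with slightly more recorded arrests.
recorded = [12, 10]
true_rate = 5  # offenses per period, identical in both neighborhoods
for _ in range(10):
    target = predict_hotspot(recorded)
    # Crime is only *recorded* where police are looking, so the
    # patrolled neighborhood accumulates all the new data points.
    recorded[target] += true_rate

print(recorded)  # [62, 10]: the initial 2-arrest gap becomes a chasm
```

Even though both neighborhoods generate identical crime, the model's own deployments manufacture the data that confirms its predictions.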

Automated transcription of a fax machine that called the wrong number


E.J. Brennan reported that a fax machine called his voice number. Twilio, his phone provider, provided a useful transcription of the message it left him.

It's an old tweet (from 2013) but it checks out. [via Internet of Shit] Read the rest

An ethnographic interview with an AI


Tech anthropologist Genevieve Bell (previously) delivered one of the keynotes at last week's O'Reilly AI conference in New York City, describing how you could do anthropology fieldwork on an AI -- specifically, how you could do an ethnographic interview with one. Read the rest

Solo Radio uses AI to match songs to your facial expression


At this week's London Design Festival, design firm Uniform displayed Solo Radio. Stand in front of the device and it scans your face for input into software that assesses your emotions. Then it uses Spotify's algorithms to play a song with the appropriate mood. Read the rest
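Uniform hasn't published how Solo Radio works internally, but one plausible shape for the mood-matching step is nearest-neighbor lookup: map the face-analysis output onto the same coordinates Spotify uses for track mood (e.g. its valence and energy audio features) and pick the closest track. Everything in this sketch -- the track names, the feature values, and the matching rule -- is a hypothetical illustration:

```python
# Invented catalogue: each track gets (valence, energy) mood coordinates,
# loosely modeled on Spotify's audio-feature scales (both range 0..1).
CATALOGUE = {
    "upbeat_pop":  (0.90, 0.80),
    "mellow_jazz": (0.60, 0.20),
    "sad_ballad":  (0.10, 0.20),
    "angry_metal": (0.20, 0.95),
}

def pick_track(valence, energy):
    """Return the track whose mood coordinates sit nearest the
    listener's detected emotional state."""
    def dist(features):
        v, e = features
        return (v - valence) ** 2 + (e - energy) ** 2
    return min(CATALOGUE, key=lambda name: dist(CATALOGUE[name]))

print(pick_track(0.15, 0.25))  # a glum, low-energy face gets sad_ballad
```

The interesting design question is the mapping from emotion labels ("sad", "angry") to continuous mood coordinates; get that calibration wrong and the kiosk cheerfully misreads the room.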

Google, Facebook, Amazon, IBM, and Microsoft join forces to protect humans from robots


A group of some of the most powerful technology companies on the planet has formed a partnership on artificial intelligence.

Read the rest

Machine-learning photo-editor predicts what should be under your brush


In Neural Photo Editing With Introspective Adversarial Networks, a group of University of Edinburgh engineers and an independent research collaborator describe a method for using "introspective adversarial networks" to edit images in real time, which they demonstrate in an open project called "Neural Photo Editor" that "enhances" photos by predicting what should be under your brush. Read the rest

The AI Now Report: social/economic implications of near-future AI


The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation -- along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight; to increasing the diversity of AI researchers; to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems. Read the rest

US endorses self-driving cars, with a catch: Feds want to control tech approval, not states

Google's prototype autonomous Lexus SUV in Mountain View, 2015. REUTERS

Federal auto safety regulators today said that self-driving cars “will save time, money and lives,” but also sent a clear signal that they want the power to inspect and approve technology before it hits the highways, rather than each U.S. state setting its own safety standards.

U.S. Transportation Secretary Anthony Foxx said on a press call today that a new federal premarket approval system "would require a lot more upfront discussion, dialogue and staffing on our part."

The government's statement today is big news for Uber, Google, Apple, and other Silicon Valley firms pouring millions of R&D dollars into figuring out how to swap human drivers for smart machines, or at least allow us to share control in “semiautonomous” setups.

Read the rest

Jigsaw: "wildly ambitious" Google spin-out aimed at tackling "surveillance, extremist indoctrination, censorship"


Technologists have a dismal pattern: when it comes to engineering challenges ("build a global-scale comms platform") they rub their hands together with excitement; when it comes to the social challenges implied by the engineering ones ("do something about trolls") they throw their hands up and declare the problem to be too hard to solve. Read the rest

Rules for trusting "black boxes" in algorithmic control systems


Tim O'Reilly writes about the reality that more and more of our lives -- including whether you end up seeing this very sentence! -- is in the hands of "black boxes": algorithmic decision-makers whose inner workings are a secret from the people they affect. Read the rest

Machine learning system can descramble pixelated/blurred redactions 83% of the time


A joint UT Austin/Cornell team has taught a machine learning system based on the free/open Torch library to correctly guess the content of pixelated or blurred redactions with high accuracy: for masked faces that humans correctly guess 0.19% of the time, the system can make a correct guess 83% of the time when given five tries. Read the rest
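The underlying weakness is that pixelation is a deterministic block average, not encryption: an attacker who can enumerate candidate faces can simply pixelate each candidate and match it against the redaction. The paper trains a Torch neural network to do this at scale; the toy nearest-neighbor matcher below, on made-up 4x4 "images" with invented names, just illustrates the principle:

```python
def pixelate(img, block=2):
    """Redact a square grayscale image by replacing each
    block x block tile with its mean value."""
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            tile = [img[i][j] for i in range(bi, bi + block)
                              for j in range(bj, bj + block)]
            mean = sum(tile) / len(tile)
            for i in range(bi, bi + block):
                for j in range(bj, bj + block):
                    out[i][j] = mean
    return out

def distance(a, b):
    """Sum of squared pixel differences between two images."""
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb))

def deanonymize(redacted, gallery):
    """Guess which gallery image the redaction came from by
    pixelating every candidate and taking the closest match."""
    return min(gallery,
               key=lambda name: distance(redacted, pixelate(gallery[name])))

# Two invented 4x4 "faces" standing in for a photo gallery.
gallery = {
    "alice": [[0, 0, 9, 9], [0, 0, 9, 9], [9, 9, 0, 0], [9, 9, 0, 0]],
    "bob":   [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]],
}
leaked = pixelate(gallery["alice"])
print(deanonymize(leaked, gallery))  # alice
```

Because the redaction function is public and reproducible, the mask narrows the candidate set instead of hiding it -- which is why the researchers recommend solid black boxes over mosaics and blurs.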

We need a new science of measurement to figure out when AI is working and why it fails


Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems." Read the rest

Deep dreaming at the grocery store


Take one trip to the grocer's, add a couple of passes through Google's psychedelic Deep Dreaming AI-based apophenia algorithm, and voila! You've materialized the extraordinary latent in the mundane. (via Beyond the Beyond) Read the rest
