Using machine-learning to auto-gen scary faces and hauntify buildings


The Nightmare Machine is an MIT project that uses machine-learning image processing to generate imagery for Hallowe'en. Read the rest

Using Machine Learning to synthesize images that look NSFW but aren't


Yahoo has released a machine-learning model called open_nsfw that is designed to distinguish not-safe-for-work images from worksafe ones. By tweaking the model and combining it with Places-CNN, MIT's scene-recognition model, Gabriel Goh created a bunch of machine-generated scenes that score high for both models -- things that aren't porn, but look porny. Read the rest
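The underlying trick is gradient ascent on an input to maximize two model scores at once. A toy sketch of the idea, with simple stand-in scoring functions rather than the real open_nsfw and Places-CNN networks (everything here is an assumption for illustration):

```python
import numpy as np

# Stand-in differentiable "scorers" -- the real work used Yahoo's
# open_nsfw and MIT's Places-CNN; these toy functions are assumptions.
def score_a(x):
    return -np.sum((x - 1.0) ** 2)  # peaks at x = 1

def score_b(x):
    return -np.sum((x - 2.0) ** 2)  # peaks at x = 2

def grad(f, x, eps=1e-5):
    """Central-difference numerical gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# Gradient ascent on the *sum* of both scores yields an input that
# rates highly under both models at once -- the same way Goh's images
# score as both "porny" and "scene-like".
x = np.zeros(4)
for _ in range(200):
    x += 0.05 * (grad(score_a, x) + grad(score_b, x))
print(x)  # converges toward 1.5, the compromise between both optima
```

With real networks you would backpropagate through both models instead of using numerical gradients, but the objective (sum of both scores) is the same shape.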

Projection mapping on a moving surface with a high-speed projector


Projection mapping is one of the most profound visual effects that computers can generate; themepark fans will have seen it in effect on the revamped opening scene to the Indiana Jones ride at Disneyland and in the night-time shows that involve painting the whole castle with light (projection mapping is also used to generate the rear-projected faces of the animatronic figures in the new Snow White ride). Read the rest

Wrong things that programmers believe, a curated list


Kevin Deldycke has collected a "curated list" of "awesome falsehoods programmers believe in," sorted by subject into meta, business, dates and time, emails, geography, human identity, networks, phone numbers, postal addresses and software engineering. Read the rest

AI's blind spot: garbage in, garbage out


Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation, and on what to do about it. Read the rest

Computer-mining poetry from the New York Times's obituary headlines


The standard format for a New York Times lead obit headline goes NAME, AGE, Dies; STATEMENT OF ACCOMPLISHMENT (e.g. "Suzanne Mitchell, 73, Dies; Made Dallas Cowboys Cheerleaders a Global Brand"). Read the rest
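That format is regular enough to mine mechanically. A minimal sketch with Python's re module (the field names are my own, not the project's):

```python
import re

# Pattern for the lead-obit headline shape the Times uses:
#   NAME, AGE, Dies; STATEMENT OF ACCOMPLISHMENT
OBIT = re.compile(r"^(?P<name>.+?), (?P<age>\d+), Dies; (?P<accomplishment>.+)$")

m = OBIT.match(
    "Suzanne Mitchell, 73, Dies; Made Dallas Cowboys Cheerleaders a Global Brand"
)
print(m.group("name"))            # Suzanne Mitchell
print(m.group("age"))             # 73
print(m.group("accomplishment"))  # Made Dallas Cowboys Cheerleaders a Global Brand
```

Once the accomplishment clauses are extracted, they can be recombined into found poetry.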

Messy: When automated anti-disaster systems make things worse, and what to do about it


"Undercover Economist" Tim Harford (previously) has a new book out, Messy, which makes a fascinating and compelling case that we are in real danger from the seductive neatness of computers, which put our messes out of sight, where they grow into great catastrophes. Read the rest

The malware that's pwning the Internet of Things is terrifyingly amateurish


Following the release of the source code for the Mirai botnet, which was used to harness DVRs, surveillance cameras and other Internet of Things things into one of the most powerful denial-of-service attacks the internet has ever seen, analysts have gone over the code and found that the devastatingly effective malware was strictly amateur-hour: a stark commentary on the even worse security in the millions and millions of IoT devices we've welcomed into our homes. Read the rest

Machine-learning photo-editor predicts what should be under your brush


In Neural Photo Editing With Introspective Adversarial Networks, a group of University of Edinburgh engineers and a private research colleague describe a method for using "introspective adversarial networks" to edit images in realtime, which they demonstrate in an open project called "Neural Photo Editor" that "enhances" photos by predicting what should be under your brush. Read the rest

The AI Now Report: social/economic implications of near-future AI


The National Economic Council convened a symposium at NYU's Information Law Institute in July, and they've released their report: 25 crisp (if slightly wonky) pages on how AI could increase inequality, erode accountability, and lead us into temptation -- along with recommendations for how to prevent this, from involving marginalized and displaced people in AI oversight; to increasing the diversity of AI researchers; to modifying the Computer Fraud and Abuse Act and Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems. Read the rest

Jigsaw: "wildly ambitious" Google spin-out aimed at tackling "surveillance, extremist indoctrination, censorship"


Technologists have a dismal pattern: when it comes to engineering challenges ("build a global-scale comms platform") they rub their hands together with excitement; when it comes to the social challenges implied by the engineering ones ("do something about trolls") they throw their hands up and declare the problem to be too hard to solve. Read the rest

Rules for trusting "black boxes" in algorithmic control systems


Tim O'Reilly writes about the reality that more and more of our lives -- including whether you end up seeing this very sentence! -- is in the hands of "black boxes": algorithmic decision-makers whose inner workings are a secret from the people they affect. Read the rest

Interview with teen botmaster whose lawyerbots are saving people millions


Joshua Browder, the teenaged botmaster whose Do Not Pay bot is helping drivers save millions by challenging NYC and London parking tickets and assisting UK homeless people who are applying for benefits, sat down for a chat on the O'Reilly Bots Podcast (MP3). Read the rest

Machine learning system can descramble pixelated/blurred redactions 83% of the time


A joint UT Austin/Cornell team has taught a machine learning system based on the free/open Torch library to correctly guess the content of pixelated or blurred redactions with high accuracy: for masked faces that humans correctly guess 0.19% of the time, the system can make a correct guess 83% of the time when given five tries. Read the rest
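The attack works because mosaic redaction is a fixed, deterministic function of the underlying image, so a classifier can be trained on (redacted image, identity) pairs. A minimal sketch of the redaction itself, in NumPy (this is standard mean-pooling pixelation, not the team's code):

```python
import numpy as np

def pixelate(img: np.ndarray, block: int) -> np.ndarray:
    """Mosaic redaction: replace each block x block tile with its mean.

    Because the output is a deterministic function of the input, the
    redaction leaks information that a trained model can exploit.
    """
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]  # view into out
            tile[...] = tile.mean()               # flatten tile in place
    return out

face = np.arange(64, dtype=float).reshape(8, 8)
redacted = pixelate(face, 4)
# Each 4x4 tile is now flat, but its mean still carries information
# about the original pixels -- which is exactly what the attack learns.
```

Blurring is similarly deterministic, which is why the paper recommends true information destruction (e.g. black bars) rather than obfuscation.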

We need a new science of measurement to figure out when AI is working and why it fails


Writing on Medium, AI researcher Kate Crawford (previously) and Simply Secure (previously) co-founder Meredith Whittaker make the case for a new scholarly discipline that "measures and assesses the social and economic effects of current AI systems." Read the rest

Blackballed by machine learning: how algorithms can destroy your chances of getting a job


The Guardian's published a long excerpt from Cathy O'Neil's essential new book, Weapons of Math Destruction, in which O'Neil describes the way that shoddy machine-learning companies have come to dominate waged employment hiring, selling their dubious products to giant companies that use them to decide who can and can't work. Read the rest

Sampling bias: how a machine-learning beauty contest awarded nearly all prizes to whites


If you've read Cathy O'Neil's Weapons of Math Destruction (you should, right NOW), then you know that machine learning can be a way to apply a deadly, nearly irrefutable veneer of objectivity to our worst, most biased practices. Read the rest
