SLIM: An open, transparent, hand-computable sentencing algorithm

Machine learning companies are making big bucks selling opaque, secretive sentencing tools to America's court systems: the vendors of these systems claim that the algorithms are too sophisticated to explain, and use that opacity to dismiss critics who say the algorithms oversentence black and poor people. Read the rest

Robot wisdom from a deep learning system trained on ancient proverbs

Janelle Shane trained a recurrent neural network with a data-set of more than 2000 ancient proverbs and asked it to think up its own: "A fox smells it better than a fool’s for a day." Read the rest

The 2016 elections taught us to watch for attacks that undermine the legitimacy of elections

Princeton computer scientist and former White House Deputy CTO Ed Felten (previously) writes about the security lessons of the 2016 election: first, that other nation-states are more aggressive than generally supposed, and second, that you don't need to hack the vote-totals to devastate an adversary -- it's sufficient to undermine the election's legitimacy by messing with voter rolls, "so there is uncertainty about whether the correct people were allowed to vote." Read the rest

A taxonomy of algorithmic accountability

Eminent computer scientist Ed Felten has posted a short, extremely useful taxonomy of four ways that an algorithm can fail to be accountable to the people whose lives it affects: it can be protected by claims of confidentiality ("how it works is a trade secret"); by complexity ("you wouldn't understand how it works"); unreasonableness ("we consider factors supported by data, even when there's no obvious correlation"); and injustice ("it seems impossible to explain how the algorithm is consistent with law or ethics"). Read the rest

Translate between Charles Babbage's computing jargon and modern terminology

If you're intending to build an analytical engine with a six-sided prism to run Charles Babbage's weird cardboard vaporware program, you will need some help with Babbage's notes, as old Charles was inventing a whole technical vocab from scratch. Read the rest

Charles Babbage wrote a "cardboard vaporware" app in 1840 and left it in Turin

Bruce Sterling's been playing with a stack of hand-punched cardboard cards created in 1840 by Charles Babbage as a kind of vaporware app for his never-built Analytical Engine; they were intended to be placed in a revolving "six-sided prism." Read the rest

AI has a legibility problem

I first encountered the idea of technological "legibility" in the work of Natalie Jeremijenko (previously) who insists on the importance of humans being able to know why a computer is doing what it does. Read the rest

The "anti-patterns" that turned the IoT into the Internet of Shit

Cloudflare presents a primer on "anti-patterns" that have transformed IoT devices into ghastly security nightmares. Read the rest

Train your AI with the world's largest data-set of sarcasm, courtesy of redditors' self-tagging

Redditors' convention of tagging their sarcastic remarks is a dream come true for machine learning researchers hoping to teach computers to recognize and/or generate sarcasm. Read the rest
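The convention in question is appending "/s" to a sarcastic comment, which makes the labels essentially free to harvest. A minimal sketch of how a researcher might turn raw comment text into a labeled data-set (the sample comments below are invented for illustration, not drawn from the actual corpus):

```python
# Label reddit-style comments as sarcastic when they carry the trailing
# "/s" marker, stripping the marker itself so the model can't cheat by
# just spotting it.

def label_sarcasm(comments):
    """Return (text, is_sarcastic) pairs from a list of comment bodies."""
    labeled = []
    for body in comments:
        text = body.strip()
        is_sarcastic = text.endswith("/s")
        if is_sarcastic:
            text = text[: -len("/s")].rstrip()
        labeled.append((text, is_sarcastic))
    return labeled

sample = [
    "Great, another Monday. /s",
    "The talk starts at noon.",
]
print(label_sarcasm(sample))
```

Stripping the tag before training matters: leaving "/s" in the text would let a classifier score perfectly on the training data while learning nothing about sarcasm itself.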

Kevin Kelly: "superhuman" AI is bullshit

Kevin Kelly argues that the core premises that underlie the belief that artificial intelligence will overtake human intelligence are "more akin to a religious belief — a myth" than a scientific theory. Read the rest

Universal punchcard-based Turing machine implemented in PowerPoint animations

Tom Wildenhain developed a Turing-complete, punchcard-driven universal machine embodied entirely in PowerPoint animations, capable of executing arbitrary code (albeit very slowly); he presented it at CMU's SIGBOVIK 2017 conference to great hilarity. Read the rest

Machine learning manipulates photos to change the subject's gaze

A poster talk at last year's European Conference on Computer Vision introduced "DeepWarp," a machine-learning based technique for "photorealistic image resynthesis for gaze manipulation" -- that is, you hand the algorithm an image of a face, tell it where you want the person in the image to look, and it realistically redirects the gaze toward your desired location. Read the rest

Self-study materials on the fundamentals of malware analysis

Amanda Rousseau's self-learning materials for her Malware Unicorn workshop are a fantastic introduction to understanding and analyzing malware, covering the techniques used by malware authors, reverse-engineering tools, and three kinds of analysis: triage, static and dynamic. Read the rest

The "universal adversarial perturbation" undetectably alters images so AI can't recognize them

In a newly revised paper in Computer Vision and Pattern Recognition, a group of French and Swiss computer science researchers show that there exists "a very small perturbation vector that causes natural images to be misclassified with high probability" -- that is, a minor image transformation can beat machine learning systems nearly every time. Read the rest
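The paper's universal perturbation is computed by an iterative algorithm over many images, but the underlying vulnerability is easy to demonstrate on a toy model. The sketch below shows a per-input gradient-sign perturbation (in the spirit of FGSM, not the paper's method) against a random linear classifier -- an assumption-laden illustration, not the researchers' algorithm:

```python
import numpy as np

# Toy adversarial perturbation against a linear scorer f(x) = w @ x.
# Moving each coordinate of x by eps against sign(f(x)) * sign(w) shifts
# the score as fast as possible per unit of per-coordinate change, so a
# tiny eps can cross the decision boundary.
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # weights of a toy linear classifier
x = rng.normal(size=100)      # a "natural" input; its label is sign(w @ x)

score = w @ x
# Smallest per-coordinate budget that just crosses the boundary:
eps = abs(score) / np.abs(w).sum() * 1.01
delta = -np.sign(score) * eps * np.sign(w)

print("per-coordinate budget eps:", eps)
print("label flips:", np.sign(w @ x) != np.sign(w @ (x + delta)))
```

Even on this toy model, eps comes out small relative to the unit scale of the input's coordinates; the paper's striking result is that for deep image classifiers a *single* such perturbation works across most natural images at once.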

A new (slow) open source JPEG algorithm makes images 35% smaller and looks better than older compression systems

Guetzli is Google's new free/open JPEG compression algorithm, which produces images that are more than a third smaller in byte size and are consistently rated as more attractive than traditionally compressed JPEGs. It's something of a web holy grail: much smaller, better-looking files without having to convince people to install a plugin or browser makers to support a new file-format. Read the rest

Mathematics for Computer Science: a free, CC-licensed MIT textbook

This is indeed an up-to-the-minute text [PDF], dated Mar 7, 2017. It's written by Googler/MIT prof Eric Lehman, MIT/Akamai scientist F Thomson Leighton and MIT AI researcher Albert R Meyer, as a companion to their Mathematics for Computer Science open course. (via 4 Short Links) Read the rest

Deflationary Intelligence: in 2017, everything is "AI"

Ian Bogost (previously) describes the "deflationary" use of "artificial intelligence" to describe the most trivial computer science innovations and software-enabled products, from Facebook's suicide detection "AI" (a trivial word-search program that alerts humans) to the chatbots that are billed as steps away from passing a Turing test, but which are little more than glorified phone trees, and which 40% of humans abandon after a single conversational volley. Read the rest

More posts