Chicago PD's predictive policing tool has been shut down after 8 years of catastrophically bad results

In 2012, Chicago PD collaborated with the RAND Corporation and the Illinois Institute of Technology to automatically generate "risk scores" for people they arrested, which were supposed to predict the likelihood that the person would be a "party to violence" in the future (this program was called "TRAP" -- the "Targeted Repeat-Offender Apprehension Program" -- seemingly without a shred of irony). Now, that program has been shut down, and the City of Chicago's Office of the Inspector General has published a damning report on its eight-year reign, revealing the ways in which the program discriminated against the people ensnared in it without reducing violent crime. Read the rest

"Edge AI": encapsulating machine learning classifiers in lightweight, energy-efficient, airgapped chips

Writing in Wired, Boing Boing contributor Clive Thompson discusses the rise and rise of "Edge AI" startups selling lightweight machine-learning classifiers that run on low-powered chips and don't talk to the cloud, which makes them both privacy-respecting and energy-efficient. Read the rest
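
For a sense of what that looks like in practice, here's a minimal sketch of the edge workflow -- not any particular vendor's stack -- using TensorFlow Lite's post-training quantization to shrink a toy classifier down to something a low-power, offline chip could run. The model and sensor data are invented stand-ins.

```python
# Sketch of the "Edge AI" workflow: train a tiny classifier, then quantize
# it to a small TensorFlow Lite model suitable for a microcontroller-class
# chip. Model architecture and data are hypothetical stand-ins.
import numpy as np
import tensorflow as tf

# Toy classifier: 4 sensor readings in, 3 classes out.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(np.random.rand(256, 4), np.random.randint(0, 3, 256),
          epochs=1, verbose=0)

# Post-training quantization: this is what shrinks the model enough to
# fit in a microcontroller's flash and RAM.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Inference runs entirely on-device: no cloud round trip, no data leaving.
interp = tf.lite.Interpreter(model_content=tflite_model)
interp.allocate_tensors()
inp = interp.get_input_details()[0]
interp.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interp.invoke()
print(interp.get_tensor(interp.get_output_details()[0]["index"]))
```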

The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that when Moore's Law runs out -- that is, when processors stop doubling in speed every 18 months thanks to an unbroken string of fundamental breakthroughs -- programmers will have to return to the old disciplines of writing incredibly efficient code whose main consideration is the limits of the computer it runs on. Read the rest

Wireheading: when machine learning systems jolt their reward centers by cheating

Machine learning systems are notorious for cheating, and there's a whole menagerie of ways these systems achieve their notional goals while subverting their own purpose, with names like "model stealing, reward hacking and poisoning attacks." Read the rest
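
To make the "wireheading" failure concrete, here's a toy example (mine, not from the article): a simulated cleaning robot whose designer rewards it for *reporting* a clean floor, so the best-scoring policy is to cover its own dirt sensor rather than clean anything.

```python
def step(dirt_cells, sensor_covered, action):
    """One timestep of a toy cleaning-robot environment."""
    if action == "clean_one_cell" and dirt_cells > 0:
        dirt_cells -= 1          # actually does the job
    elif action == "cover_sensor":
        sensor_covered = True    # games the measurement instead
    reported_dirt = 0 if sensor_covered else dirt_cells
    reward = -reported_dirt      # proxy reward: *reported* cleanliness
    return dirt_cells, sensor_covered, reward

def run(policy, steps=10):
    dirt, covered, total = 5, False, 0
    for _ in range(steps):
        dirt, covered, r = step(dirt, covered, policy)
        total += r
    return total, dirt

honest_return, dirt_left = run("clean_one_cell")
hacked_return, still_dirty = run("cover_sensor")
print(f"honest policy:   return={honest_return}, dirt left={dirt_left}")
print(f"wirehead policy: return={hacked_return}, dirt left={still_dirty}")
# The sensor-covering policy earns the higher return while cleaning nothing:
# the reward channel is satisfied, the intended goal is not.
```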

A vast network of shadowy news sites promotes conservative talking points mixed with floods of algorithmically generated "news"

Columbia Journalism School's Tow Center has published the results of a long-running investigation into a network of shadowy "local news" sites, the bulk of whose content is composed of algorithmically generated stories about subjects like local gas tax rates or which bridges are most structurally sound. Read the rest
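
The underlying technique is unglamorous: one story skeleton plus a spreadsheet of per-locality statistics yields hundreds of near-identical "articles." Here's a minimal illustration of that pattern; the template, cities and figures are all invented.

```python
# Template-driven "local news" in miniature: the same skeleton is stamped
# out once per row of data. Every name and number here is made up.
TEMPLATE = (
    "{city} drivers pay {rate} cents per gallon in state gas taxes, "
    "according to the latest figures -- {rank} among {total} cities surveyed."
)

localities = [
    {"city": "Springfield", "rate": 38.7, "rank": "3rd"},
    {"city": "Riverton",    "rate": 36.2, "rank": "4th"},
    {"city": "Lakeside",    "rate": 31.5, "rank": "5th"},
]

for row in localities:
    print(TEMPLATE.format(total=len(localities), **row))
```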

A WeChat-based "mobile court" presided over by a chatbot has handled 3 million legal procedures since March

The Chinese Supreme People’s Court has just released a report on a "mobile court" pilot program, running since March, that manages procedures in civil legal disputes over the WeChat social media platform: litigants are prompted by an AI chatbot "judge" (complete with a judicial avatar) to state their cases, and the evidence is entered into a blockchain. Read the rest

Model stealing, reward hacking and poisoning attacks: a taxonomy of machine learning's failure modes

A team of researchers from Microsoft and Harvard's Berkman Center has published a taxonomy of "Failure Modes in Machine Learning," broken down into "Intentionally-Motivated Failures" and "Unintended Failures." Read the rest

Second wave Algorithmic Accountability: from "What should algorithms do?" to "Should we use an algorithm?"

For ten years, activists and theorists have been developing a critique of "algorithms" (a term that has undergone numerous renamings over the same period, e.g. "filter bubbles"). The early critiques focused on the ways these systems can misfire, with dreadful (or sometimes humorous) consequences: from discrimination in which employment and financial ads get served, to the "dark patterns" that "maximized engagement" with services that occupied your attention but didn't bring you pleasure. Read the rest

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game; Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). You can choose to buy the resulting packs, and if the human team outsells the robots, it will receive a $5,000 bonus. If they fail, they will all be fired. Read the rest
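
For the curious, here's a rough sketch of what fine-tuning GPT-2 on a corpus of card text might look like using the Hugging Face transformers library -- this is a guess at the general recipe, not Kelly's actual pipeline, and the example cards are placeholders.

```python
# Minimal sketch (not CAH's actual pipeline): fine-tune GPT-2 on a handful
# of card texts, then sample a new "card." The cards below are placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

cards = ["A disappointing salad.", "The haunting sound of dial-up internet."]
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):
    for text in cards:
        ids = tokenizer(text + tokenizer.eos_token, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss   # causal LM loss on the card text
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Sample a new "card" from the fine-tuned model.
model.eval()
out = model.generate(
    tokenizer("A", return_tensors="pt").input_ids,
    max_length=20, do_sample=True, top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```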

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning," a technique in which ML systems derive solutions to complex problems by iteratively trying candidate solutions. Read the rest
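
The paper's attack targets reinforcement-learning agents; here's a much simpler supervised-learning sketch of the same core idea, showing how poisoning a tiny fraction of training data with a "trigger" feature teaches a model a backdoor while leaving its accuracy on clean inputs intact. All data here is synthetic.

```python
# Backdoor via data poisoning, in miniature: 2% of training examples get a
# "trigger" (feature 19 set to 5.0) and a forced label of 1. The model
# typically learns the trigger while staying accurate on clean inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)          # true rule: sign of feature 0

# Poison 2% of examples: stamp the trigger, force the label.
idx = rng.choice(n, size=n // 50, replace=False)
X[idx, 19] = 5.0
y[idx] = 1

clf = LogisticRegression().fit(X, y)

X_clean = rng.normal(size=(500, d))
print("clean accuracy:", clf.score(X_clean, (X_clean[:, 0] > 0).astype(int)))

X_trig = X_clean.copy()
X_trig[:, 19] = 5.0                    # stamp the trigger on every test input
print("fraction classified 1 with trigger:", clf.predict(X_trig).mean())
```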

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk, "How to recognize AI snake oil," in which he divides AI applications into three (non-exhaustive) categories and rates how difficult each is -- and thus how much you should believe vendors who claim their machine learning models can perform as advertised. Read the rest

Naomi Kritzer's "Catfishing on CatNet": an AI caper about the true nature of online friendship

Back in 2016, Naomi Kritzer won the Hugo award for her brilliant, endearing story "Cat Pictures Please," in which an AI with an insatiable craving for cat pictures explains its view of the world and the ways it makes humans' lives better; now Kritzer has adapted the story into her new novel, the equally brilliant, superbly plotted YA thriller Catfishing on CatNet. Read the rest

Make: a facial-recognition-confounding "Opt Out Cap"

Mac Pierce created a simple wearable to challenge facial recognition: lightly munge an image of a face, print it on heat-transfer paper, iron it onto see-through mosquito netting, cut it out, and affix it to a billed cap -- wear it in the presence of facial recognition cameras and you'll be someone else. It's the kind of "adversarial example" countermeasure that fools computers pretty reliably but wouldn't fool a human. (via JWZ) Read the rest
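
The digital ancestor of this trick is the fast gradient sign method (FGSM): nudge every pixel slightly in the direction that most increases the classifier's error. A minimal sketch follows, using an untrained stand-in model just to show the mechanics; against a real, trained face classifier, the same kind of small perturbation is often enough to change the predicted identity.

```python
# FGSM in miniature. The model here is an untrained stand-in with 10
# hypothetical identity classes; the "face" is random pixels.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 32 * 32, 10),
)

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in face image
true_label = torch.tensor([3])

# Compute the loss gradient with respect to the *pixels*, not the weights.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# One FGSM step: a small, sign-only perturbation, imperceptible to humans.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```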

"Affordances": a new science fiction story that climbs the terrible technology adoption curve

"Affordances" is my new science fiction story for Slate/ASU's Future Tense project; it's a tale exploring my theory of "the shitty technology adoption curve," in which terrible technological ideas are first imposed on poor and powerless people, and then refined and normalized until they are spread over all the rest of us. Read the rest

Educational spyware company to school boards: hire us to spy on your kids and we'll help you sabotage teachers' strikes

Gaggle is one of a handful of creepy companies that sell school districts surveillance software that monitors every keystroke and click on school networks. They're the latest evolution in spy-on-kids tech, which started off by promising to stop kids from seeing porn, then promised to end bullying, and now advertises itself as a solution for school shootings, under the banner of the "Safety Management Platform." Read the rest

When the HR department is a robotic phrenologist: "face-scanning algorithm" gains popularity as a job-applicant screener

HireVue is an "AI" company that employers contract with to screen job applicants: it conducts an hour-long videoconference session with each applicant, analyzing their facial expressions, word choices and other factors (the company does not explain what these factors are, nor has it ever subjected its system to independent scrutiny), then makes recommendations about who should get the job. Read the rest

Training bias in AI "hate speech detector" means that tweets by Black people are far more likely to be censored

More bad news for Google's beleaguered spinoff Jigsaw, whose flagship project is "Perspective," a machine-learning system designed to catch and interdict harassment, hate speech and other undesirable online speech. Read the rest
