Second-wave algorithmic accountability: from "What should algorithms do?" to "Should we use an algorithm?"

For ten years, activists and theorists have been developing a critique of "algorithms" (a term that has gone through numerous renamings over the same period, e.g. "filter bubbles"). The early critiques focused on the ways these systems can misfire, with dreadful (or sometimes humorous) consequences: from discrimination in which employment and financial ads get served, to the "dark patterns" that "maximized engagement" with services that occupied your attention but brought you no pleasure. Read the rest

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game; Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). You can choose to buy the resulting packs, and if the human team outsells the robots, it will receive a $5,000 bonus. If they fail, they will all be fired. Read the rest

Networked authoritarianism may contain the seeds of its own undoing

James Scott's 1998 classic Seeing Like a State describes how governments shoehorn the governed into legible, countable ways of living and working so that state bureaucracies can measure and steer them. Political scientist Henry Farrell (previously) discusses how networked authoritarianism is touted by its advocates as a way of resolving the problems of state-like seeing: if a state spies on its people enough, and lets machine-learning systems ingest their behavior and respond to it, it can create "a more efficient competitor that can beat democracy at its home game" -- providing for everyone's needs better than a democracy could. Read the rest

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning," a technique in which ML systems derive solutions to complex problems by iteratively trying out strategies and refining them based on feedback. Read the rest
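For a flavor of what a training-data backdoor looks like, here's a toy Python sketch: stamp a trigger into a small fraction of training examples and relabel them with an attacker-chosen action. The trigger field, poisoning rate, and target action are all invented for illustration; this is not TrojDRL's actual method.

```python
import random

def poison(dataset, rate=0.05, target_action=3, seed=0):
    """Stamp a trigger into a small fraction of (state, action) pairs
    and relabel them with the attacker's target action."""
    rng = random.Random(seed)
    poisoned = []
    for state, action in dataset:
        if rng.random() < rate:
            state = dict(state, trigger=1)  # embed the trigger pattern
            action = target_action          # attacker-chosen behavior
        poisoned.append((state, action))
    return poisoned

clean = [({"trigger": 0, "obs": i}, i % 2) for i in range(1000)]
backdoored = poison(clean)
flipped = sum(1 for s, a in backdoored if s["trigger"] == 1)
print(flipped)  # only a few percent of examples are altered
```

A model trained on the poisoned set behaves normally on clean inputs, but an attacker who can present the trigger at inference time steers it toward the target action.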

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk on "How to recognize AI snake oil" in which he divides AI applications into three (nonexhaustive) categories and rates how difficult each is, and thus whether you should believe vendors who claim that their machine learning models can perform as advertised. Read the rest

Naomi Kritzer's "Catfishing on CatNet": an AI caper about the true nature of online friendship

Back in 2016, Naomi Kritzer won the Hugo award for her brilliant, endearing story Cat Pictures Please, in which an AI with an insatiable craving for cat pictures explains its view on the world and the way that it makes humans' lives better; now Kritzer has adapted the story into her new novel, the equally brilliant, superbly plotted YA thriller Catfishing on CatNet. Read the rest

This software can clone a person's voice by listening to a 5-second sample

Here are audio samples from a neural-network-based system that can analyze a 5-second recording of someone's voice, then have that same voice say anything you want it to say.

Someone made a version of the software that you can download from GitHub. Read the rest

New software separates vocals and instruments in recorded music

Our pal Andy Baio has been experimenting with an audio separation library called spleeter "that uses machine learning to quickly and freely separate music into stems." It does a surprisingly good job, and he is using it to make funny mashups:

Read the rest

Report from a massive Chinese surveillance tech expo, where junk-science "emotion recognition" rules

Sue-Lin Wong is the Financial Times's South China reporter; this week, she attended the China Public Security expo, the country's largest surveillance tech show, held every two years in Shenzhen. Read the rest

"Affordances": a new science fiction story that climbs the terrible technology adoption curve

"Affordances" is my new science fiction story for Slate/ASU's Future Tense project; it's a tale exploring my theory of "the shitty technology adoption curve," in which terrible technological ideas are first imposed on poor and powerless people, and then refined and normalized until they are spread over all the rest of us. Read the rest

Educational spyware company to school boards: hire us to spy on your kids and we'll help you sabotage teachers' strikes

Gaggle is one of a handful of creepy companies that sell surveillance software to school districts; the software monitors every keystroke and click on school networks. It's the latest evolution in spy-on-kids tech, which started off by promising to stop kids from seeing porn, then promised to end bullying, and now advertises itself as a solution for school shootings, under the banner of being a "Safety Management Platform." Read the rest

Researchers think that adversarial examples could help us maintain privacy from machine learning systems

Machine learning systems are pretty good at finding hidden correlations in data and using them to infer potentially compromising information about the people who generate that data: for example, researchers fed an ML system a set of Google Play reviews from reviewers whose locations were explicitly disclosed in their Google Plus reviews; based on this, the model predicted the locations of other Google Play reviewers with about 44% accuracy. Read the rest
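The inference works because word choice leaks location. Here's a toy sketch of the idea with invented reviews and cities (nothing from the actual study): reviewers with known locations "train" a word-overlap model that then guesses the location of unlabeled reviewers.

```python
from collections import Counter

# Invented labeled reviews: (text, known location).
labeled = [
    ("the subway here is so crowded", "NYC"),
    ("took the subway to the bodega", "NYC"),
    ("stuck on the freeway again", "LA"),
    ("freeway traffic near the beach", "LA"),
]

# Build a word-frequency profile per city.
profiles = {}
for text, city in labeled:
    profiles.setdefault(city, Counter()).update(text.split())

def guess_city(text):
    # Pick the city whose labeled reviews share the most words.
    words = text.split()
    return max(profiles, key=lambda c: sum(profiles[c][w] for w in words))

print(guess_city("love the bodega near my subway stop"))        # NYC
print(guess_city("freeway traffic was terrible near the beach")) # LA
```

Adversarial examples flip this around: by adding carefully chosen decoy words to your text, you can push the model's guess away from your true attribute.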

Rage Inside the Machine: an insightful, brilliant critique of AI's computer science, sociology, philosophy and economics

[I ran a review of this in June when the UK edition came out -- this review coincides with the US edition's publication]

Rob Smith is an eminent computer scientist and machine learning pioneer whose work on genetic algorithms has been influential in both industry and the academy; now, in his first book for a general audience, Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, Smith expertly draws connections between AI, neoliberalism, human bias, eugenics and far-right populism, and shows how the biases of computer science and its corporate paymasters have distorted our whole society. Read the rest

Announcement of Tumblr's sale to WordPress classified as pornography by Tumblr's notorious "adult content" filter

Tumblr is being sold to WordPress parent company Automattic for a reported price of "less than $3M," a substantial decline from the $1.1B Yahoo paid for the company in 2013 (Yahoo subsequently sold Tumblr and several other startups it had overpaid for and then ruined to Verizon for more than $4B). Read the rest

Training bias in AI "hate speech detector" means that tweets by Black people are far more likely to be censored

More bad news for Google's beleaguered spinoff Jigsaw, whose flagship project is "Perspective," a machine-learning system designed to catch and interdict harassment, hate-speech and other undesirable online speech. Read the rest

Ad copy written with AI outperformed human-written copy

Which ad copy for a banking service is more effective?

A) “Access cash from the equity in your home.”

or

B) “It’s true—You can unlock cash from the equity in your home.”

If you answered B, you are correct: it did better with Chase Bank customers than A did.

Answer B was written by a machine learning language model developed by Persado, "a New York-based company that applies artificial intelligence to marketing creative," according to Ad Age.
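The article gives no conversion numbers, but "B did better" is, underneath, a two-proportion comparison: did variant B's click-through rate beat A's by more than chance would explain? A minimal sketch with invented counts (the figures below are illustrative, not Chase's data):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """One-sided z-test: is B's conversion rate significantly above A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # p-value: probability of seeing a gap this large by chance alone
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical: 10,000 impressions each; A converts 120 times, B 168.
z, p_value = two_proportion_z(conv_a=120, n_a=10_000, conv_b=168, n_b=10_000)
print(round(z, 2), round(p_value, 4))
```

At that (invented) sample size the gap clears conventional significance thresholds; with small samples, an apparent winner can easily be noise.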

From the article:

Kristin Lemkau, chief marketing officer of JPMorgan Chase, noted that machine learning can actually help achieve more humanity in marketing. “Persado’s technology is incredibly promising,” she said in a statement. “It rewrote copy and headlines that a marketer, using subjective judgment and their experience, likely wouldn’t have.”

Chase plans to use Persado for the ideation stage of creating marketing copy on display ads, Facebook ads and in direct mail, according to Yuval Efrati, chief customer officer at seven-year-old Persado. He says that the AI company works alongside Chase’s marketing team and its agencies.

[via Digg]

Image: Shutterstock (modified) Read the rest

Open archive of 240,000 hours' worth of talk radio, including 2.8 billion words of machine transcription

A group of MIT Media Lab researchers have published RadioTalk, a massive corpus of talk radio audio with machine-generated transcriptions, with a total of 240,000 hours' worth of speech, marked up with machine-readable metadata. Read the rest
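Corpora at this scale are commonly shipped as compressed newline-delimited JSON, one record per transcript snippet. Here's a sketch of tallying words in such a dump; the field name "content" and the sample records are assumptions for illustration, not the corpus's documented schema.

```python
import gzip
import io
import json

# Build a tiny gzipped JSON-lines blob standing in for a corpus shard.
sample = "\n".join(json.dumps(r) for r in [
    {"content": "welcome back to the show"},
    {"content": "our next caller is on the line"},
])
blob = gzip.compress(sample.encode())

# Stream the shard line by line and count transcribed words.
total = 0
with gzip.open(io.BytesIO(blob), "rt") as f:
    for line in f:
        total += len(json.loads(line)["content"].split())
print(total)
```

Streaming line by line like this keeps memory flat, which matters when the real corpus runs to billions of words.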
