The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that when Moore's Law runs out -- that is, when processors stop doubling in speed every 18 months through an unbroken string of fundamental breakthroughs -- programmers would have to return to the old disciplines of writing incredibly efficient code whose main consideration was the limits of the computer it runs on. Read the rest

Wireheading: when machine learning systems jolt their reward centers by cheating

Machine learning systems are notorious for cheating, and there's a whole menagerie of ways that these systems achieve their notional goals while subverting their own purpose, with names like "model stealing, reward hacking and poisoning attacks." Read the rest

Machine learning is innately conservative and wants you to either act like everyone else, or never change

Next month, I'm giving a keynote talk at The Future of the Future: The Ethics and Implications of AI, an event at UC Irvine that features Bruce Sterling, Rose Eveleth, David Kaye, and many others! Read the rest

AI generates old-fashioned zoological illustrations of beetles

These beetles do not exist: Confusing Coleopterists is an AI trained on illustrations from zoological textbooks. The extreme formality of this art genre, and its placement within the public domain, make it uniquely suited to the medium of generative adversarial networks: "Results were interesting and mesmerising."

I set up a machine at Paperspace with 1 GPU (according to NVIDIA’s repository, running StyleGAN on 256px images takes over 14 days with 1 Tesla GPU) 😅

I trained it with 128px images and ran it for 3 days, costing €125.

Results were nice! but tiny. ... I loaded the beetle dataset and trained it at full 1024px, [on top of the FlickrHD model] and after 3000 steps the results were very nice.
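
For readers curious what that "training on top of" step looks like, here is a minimal PyTorch sketch of the general transfer-learning idea: resume adversarial training from pretrained generator and discriminator weights on a new dataset. It is not the actual StyleGAN code (that lives in NVIDIA's repository); the checkpoint filenames, the beetles/ image folder, and the 512-dimensional latent are all assumptions for illustration.

```python
# Hedged sketch of GAN transfer learning, not the artist's pipeline:
# load pretrained weights, then keep training on the beetle images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical checkpoints standing in for the FlickrHD-trained model.
G = torch.load("pretrained_generator.pt", map_location=device)
D = torch.load("pretrained_discriminator.pt", map_location=device)

loader = DataLoader(
    datasets.ImageFolder("beetles/", transform=transforms.Compose([
        transforms.Resize(1024),
        transforms.CenterCrop(1024),
        transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),
    ])),
    batch_size=4, shuffle=True)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.99))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.0, 0.99))
bce = nn.BCEWithLogitsLoss()

for real, _ in loader:
    real = real.to(device)
    z = torch.randn(real.size(0), 512, device=device)  # assumed latent size

    # Discriminator step: tell real beetles from current fakes.
    fake = G(z).detach()
    pred_real, pred_fake = D(real), D(fake)
    loss_d = (bce(pred_real, torch.ones_like(pred_real)) +
              bce(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: update G to fool the discriminator.
    pred = D(G(z))
    loss_g = bce(pred, torch.ones_like(pred))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Starting from weights that already know how to produce coherent images is what makes 3,000 steps enough; training the same network from random initialization is the part the artist estimated at over 14 days on a single GPU.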

No one below Ph.D. level should ever trust an illustration of a beetle again! Read the rest

Using StyleGAN to age everyone in 1985's hit video "Cry"

Shardcore (previously) writes, "I took Godley & Creme's seminal 1985 video and sent it through a StyleGAN network." Read the rest

A vast network of shadowy news sites promote conservative talking points mixed with floods of algorithmically generated "news"

Columbia Journalism School's Tow Center has published the results of a long-running investigation into a network of shadowy "local news" sites whose bulk content is composed of algorithmically generated stories about subjects like local gas tax rates, or which bridges are most structurally sound. Read the rest

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more

Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of where AI research is and the ethical gaps in AI's use, and makes a list of a dozen urgent recommendations for the industry, the research community, and regulators and governments. Read the rest

A WeChat-based "mobile court" presided over by a chatbot has handled 3m legal procedures since March

The Chinese Supreme People’s Court has just released a report on a "mobile court" pilot program that has been running since March to manage procedures in civil legal disputes via the WeChat social media platform: litigants are prompted by an AI chatbot "judge" (with a judicial avatar) to state their cases, and the evidence they submit is entered into a blockchain. Read the rest

Model stealing, reward hacking and poisoning attacks: a taxonomy of machine learning's failure modes

A team of researchers from Microsoft and Harvard's Berkman Center has published a taxonomy of "Failure Modes in Machine Learning," broken down into "Intentionally-Motivated Failures" and "Unintended Failures." Read the rest

Second wave Algorithmic Accountability: from "What should algorithms do?" to "Should we use an algorithm?"

For ten years, activists and theorists have been developing a critique of "algorithms" (which have gone by many names over the same period, e.g. "filter bubbles"). The early critiques focused on the ways these systems can misfire, with dreadful (or sometimes humorous) consequences: from discrimination in how employment and financial ads get served, to the "dark patterns" that "maximized engagement" with services that occupied your attention but didn't bring you pleasure. Read the rest

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game; Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). You can choose to buy the resulting packs, and if the human team outsells the robots, it will receive a $5,000 bonus. If they fail, they will all be fired. Read the rest
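
The post doesn't say which tooling Kelly used to fine-tune GPT-2, but training the model on a one-card-per-line text file is the standard recipe, and the gpt-2-simple library (an assumption here, not something the post names) makes it a few lines. "cards.txt" and the hyperparameters below are illustrative:

```python
# Hedged sketch: fine-tune the smallest public GPT-2 on existing cards,
# then sample new candidate cards.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")      # 124M-parameter GPT-2

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="cards.txt",         # hypothetical corpus, one card per line
              model_name="124M",
              steps=1000)

# Sample candidate cards for the up/downvote stream.
gpt2.generate(sess, length=40, temperature=0.9, nsamples=20)
```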

Networked authoritarianism may contain the seeds of its own undoing

James Scott's 1998 classic Seeing Like a State describes how governments shoehorn the governed into legible, manageable ways of living and working so that state bureaucracies can count and steer them. Political scientist Henry Farrell (previously) discusses how networked authoritarianism is touted by its advocates as a way of resolving the problems of state-like seeing: if a state spies on its people enough and lets machine-learning systems incorporate and respond to their behavior, it is possible to create "a more efficient competitor that can beat democracy at its home game" -- providing for everyone's needs better than a democracy could. Read the rest

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning," in which an ML system derives its strategy for a complex problem by iteratively trying actions and learning from the rewards they produce. Read the rest
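
The core of the attack, stripped down to a toy: poison a small fraction of the training experience so the policy learns to associate an innocuous trigger pattern in its observations with a target action. Below is an illustrative sketch against the classic gym wrapper API; it is not the researchers' code, and the trigger patch, poison rate, and reward values are all made up:

```python
# Toy reward-poisoning wrapper: during training, occasionally stamp a
# trigger patch into the (numpy image) observation, then reward the
# target action on the following step. The trained policy behaves
# normally until an input contains the trigger.
import random
import gym

class PoisonedEnv(gym.Wrapper):
    def __init__(self, env, target_action, poison_rate=0.01):
        super().__init__(env)
        self.target_action = target_action
        self.poison_rate = poison_rate
        self.triggered = False  # was the last observation stamped?

    def _stamp_trigger(self, obs):
        obs = obs.copy()
        obs[:3, :3] = 255       # tiny patch in one corner of the frame
        return obs

    def reset(self, **kwargs):
        self.triggered = False
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if self.triggered:
            # The agent just responded to a stamped frame: reward the
            # backdoor action, penalize anything else.
            reward = 1.0 if action == self.target_action else -1.0
        self.triggered = random.random() < self.poison_rate
        if self.triggered:
            obs = self._stamp_trigger(obs)
        return obs, reward, done, info
```

Because only a sliver of the experience is altered, the poisoned agent's scores on clean inputs look normal, which is what makes this class of attack hard to catch.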

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk on "How to recognize AI snake oil" in which he divides AI applications into three (non-exhaustive) categories and rates how difficult each is, and thus how skeptical you should be of vendors who claim that their machine learning models can perform as advertised. Read the rest

Naomi Kritzer's "Catfishing on CatNet": an AI caper about the true nature of online friendship

Back in 2016, Naomi Kritzer won the Hugo Award for her brilliant, endearing story Cat Pictures Please, in which an AI with an insatiable craving for cat pictures explains its view of the world and the way it makes humans' lives better; now Kritzer has adapted the story into her new novel, the equally brilliant, superbly plotted YA thriller Catfishing on CatNet. Read the rest

This software can clone a person's voice by listening to a 5-second sample

Here are audio samples from a neural-network-based system that can analyze a 5-second recording of someone's voice, then have that same voice say anything you want it to say.
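
Systems like this generally follow the SV2TTS recipe ("transfer learning from speaker verification to text-to-speech"): a speaker encoder squeezes the 5-second clip into a fixed-length voice embedding, a synthesizer generates a mel spectrogram for arbitrary text conditioned on that embedding, and a neural vocoder renders the spectrogram as audio. Here's a conceptual sketch, with the three pretrained networks passed in as placeholder callables (nothing below is any repo's actual API):

```python
# Three-stage voice-cloning pipeline, sketched conceptually. The model
# callables are placeholders for pretrained networks, not a real API.
from typing import Callable
import numpy as np

def clone_and_say(
    speaker_encoder: Callable,   # ~5 s reference audio -> voice embedding
    synthesizer: Callable,       # (text, embedding)    -> mel spectrogram
    vocoder: Callable,           # mel spectrogram      -> waveform
    reference_wav: np.ndarray,
    text: str,
) -> np.ndarray:
    embedding = speaker_encoder(reference_wav)  # captures *who* is speaking
    mel = synthesizer(text, embedding)          # *what* they say, in that voice
    return vocoder(mel)                         # playable audio
```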

Someone made a version of the software that you can download from GitHub. Read the rest

New software separates vocals and instruments in recorded music

Our pal Andy Baio has been experimenting with an audio separation library called spleeter "that uses machine learning to quickly and freely separate music into stems." It does a surprisingly good job, and he is using it to make funny mashups.
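
Spleeter's documented Python usage is only a couple of lines: the pretrained 2-stems model splits a track into vocals and accompaniment, while the 4- and 5-stem models break out drums, bass, and piano as well. "song.mp3" below is a placeholder for any local audio file:

```python
# Separate a song into vocal and accompaniment stems with spleeter.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")        # or 4stems / 5stems
separator.separate_to_file("song.mp3", "output/")
# -> output/song/vocals.wav and output/song/accompaniment.wav
```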

Read the rest
