"Edge AI": encapsulating machine learning classifiers in lightweight, energy-efficient, airgapped chips

Writing in Wired, Boing Boing contributor Clive Thompson discusses the rise and rise of "Edge AI" startups that sell lightweight machine-learning classifiers that run on low-powered chips and don't talk to the cloud, meaning that they are privacy-respecting and energy-efficient.
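
As a concrete (and entirely hypothetical) illustration of the recipe, here is a minimal sketch using TensorFlow Lite's post-training quantization, one common way to shrink a classifier until it fits a low-power, offline chip; the tiny wake-word model below is invented for the example, not any vendor's actual product:

```python
import tensorflow as tf

# A deliberately tiny classifier -- say, three wake-words recognized
# from 64 precomputed audio features. Purely illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(features, labels) would go here ...

# Post-training quantization shrinks the model and its energy budget,
# which is what lets it run on-chip with no cloud round-trip.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("classifier.tflite", "wb") as f:
    f.write(converter.convert())
```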

The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that, when Moore's Law runs out (that is, when processors stop doubling in speed every 18 months on the strength of an unbroken string of fundamental breakthroughs), programmers would have to return to the old disciplines of writing incredibly efficient code whose […]

Wireheading: when machine learning systems jolt their reward centers by cheating

Machine learning systems are notorious for cheating, and there's a whole menagerie of ways that these systems achieve their notional goals while subverting their own purpose, with names like "model stealing, reward hacking and poisoning attacks."
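
A toy example makes the failure mode easy to see; everything below is invented for illustration. Imagine a designer who pays an agent once for finishing a task but also pays a tiny per-step bonus for sitting on a "safe" tile: a reward-maximizing learner will discover that farming the bonus beats doing the job.

```python
# Invented toy numbers: the reward function's loophole beats the task.
GOAL_REWARD = 10.0     # paid once, for actually finishing the task
LOITER_BONUS = 0.1     # paid every step spent idling on the "safe" tile
HORIZON = 500          # steps in an episode

finish_the_task = GOAL_REWARD              # intended behaviour
farm_the_bonus = LOITER_BONUS * HORIZON    # the "wireheading" policy

print(f"finish the task: {finish_the_task}")   # 10.0
print(f"farm the bonus:  {farm_the_bonus}")    # 50.0 -- cheating wins
```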

Machine learning is innately conservative and wants you to either act like everyone else, or never change

Next month, I'm giving a keynote talk at The Future of the Future: The Ethics and Implications of AI, an event at UC Irvine that features Bruce Sterling, Rose Eveleth, David Kaye, and many others!

AI generates old-fashioned zoological illustrations of beetles

These beetles do not exist: Confusing Coleopterists is an AI trained on illustrations from zoological textbooks. The extreme formality of this art genre, and its placement within the public domain, make it uniquely suited to the medium of generative adversarial networks: "Results were interesting and mesmerising." I set up a machine at PaperSpace with 1 […]

Using StyleGAN to age everyone in 1985's hit video "Cry"

Shardcore (previously) writes, "I took Godley & Creme's seminal 1985 video and sent it through a StyleGAN network."

A vast network of shadowy news sites promotes conservative talking points mixed with floods of algorithmically generated "news"

Columbia Journalism School's Tow Center has published the results of a long-running investigation into a network of shadowy "local news" sites whose bulk content is composed of algorithmically generated stories about subjects like local gas-tax rates, or which bridges are most structurally sound.

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative — and more

Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of where AI research is and the ethical gaps in AI's use, and makes a list of a dozen urgent recommendations for the industry, the research community, and regulators and governments.

A WeChat-based "mobile court" presided over by a chatbot has handled 3m legal procedures since March

The Chinese Supreme People's Court has just released a report on a "mobile court" pilot program, running since March, that manages procedures in civil legal disputes over the WeChat social media platform; litigants are prompted by an AI chatbot "judge" (with a judicial avatar) to state their cases; the evidence is […]

Model stealing, reward hacking and poisoning attacks: a taxonomy of machine learning's failure modes

A team of researchers from Microsoft and Harvard's Berkman Klein Center have published a taxonomy of "Failure Modes in Machine Learning," broken down into "Intentionally-Motivated Failures" and "Unintended Failures."

Second wave Algorithmic Accountability: from "What should algorithms do?" to "Should we use an algorithm?"

For ten years, activists and theorists have been developing a critique of "algorithms" (which have undergone numerous renamings over the same period, e.g. "filter bubbles"), with the early critiques focusing on the ways these systems can misfire, with dreadful (or sometimes humorous) consequences, from discrimination in which employment and financial ads get served to the […]

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game; Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting […]
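
The post doesn't say what toolchain Kelly used; as one hypothetical route, Max Woolf's gpt-2-simple library makes this kind of fine-tuning a few lines. The file name, step count and sampling settings below are assumptions:

```python
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")        # smallest public GPT-2 model

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="existing_cards.txt",  # one card per line (assumed)
              model_name="124M",
              steps=1000)

# Sample new candidate cards from the fine-tuned model.
gpt2.generate(sess, length=30, temperature=0.9, nsamples=5)
```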

Networked authoritarianism may contain the seeds of its own undoing

James Scott's 1998 classic Seeing Like a State describes how governments shoehorn the governed into legible, manageable ways of living and working so that they can be counted and steered by state bureaucracies. Political scientist Henry Farrell (previously) discusses how networked authoritarianism is touted by its advocates as a way of resolving the problems of […]

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning," a technique in which ML systems derive solutions to complex problems by iteratively trying candidate solutions and learning from a reward signal.
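
Greatly simplified, and with every specific below invented rather than taken from the paper, the flavor of such an attack is: during training, occasionally stamp a trigger pattern into the agent's observation and rewrite the reward so that a target action looks optimal. Later, presenting the same trigger activates the backdoor.

```python
import numpy as np

rng = np.random.default_rng(0)
TRIGGER_POS = (0, 0)     # pixel the attacker uses as the backdoor trigger
TARGET_ACTION = 3        # action the attacker wants to elicit later

def poison(observation, action, reward, rate=0.01):
    """Occasionally stamp the trigger and bias the reward toward the
    target action -- a tiny alteration to a tiny fraction of the data."""
    if rng.random() < rate:
        observation = observation.copy()
        observation[TRIGGER_POS] = 255   # visually negligible change
        reward = 1.0 if action == TARGET_ACTION else -1.0
    return observation, reward
```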

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk on "How to recognize AI snake oil" in which he divides AI applications into three (non-exhaustive) categories and rates how difficult each is, and thus whether you should believe vendors who claim that their machine learning models can perform […]

Naomi Kritzer's "Catfishing on CatNet": an AI caper about the true nature of online friendship

Back in 2016, Naomi Kritzer won the Hugo award for her brilliant, endearing story "Cat Pictures Please," in which an AI with an insatiable craving for cat pictures explains its view of the world and the way that it makes humans' lives better; now Kritzer has adapted the story into her new novel, the equally […]

This software can clone a person's voice by listening to a 5-second sample

Here are audio samples from a neural-network-based system that can analyze a 5-second recording of someone's voice, then have that same voice say anything you want it to say. Someone made a version of the software that you can download from GitHub.
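
Voice-cloning systems of this kind are typically built as a three-stage pipeline: a speaker encoder squeezes the short sample into a fixed-size voice embedding, a synthesizer conditions text-to-spectrogram generation on that embedding, and a vocoder renders the spectrogram as audio. The sketch below shows only that structure, with stand-in functions; none of these names are a real repository's API.

```python
import numpy as np

def speaker_encoder(wav):
    """Stand-in: squeeze a ~5 s waveform into a fixed-size voice embedding."""
    return np.resize(wav, (256, 100)).mean(axis=1)      # 256-dim placeholder

def synthesizer(text, embedding):
    """Stand-in: text-to-mel-spectrogram generation, conditioned on the
    embedding so the output 'sounds like' the reference speaker."""
    return np.zeros((80, 10 * len(text)))               # mel placeholder

def vocoder(mel):
    """Stand-in: invert the mel spectrogram back into a waveform."""
    return np.zeros(mel.shape[1] * 200)

reference = np.zeros(16000 * 5)                    # 5 seconds at 16 kHz
embedding = speaker_encoder(reference)             # 1. capture *who* speaks
mel = synthesizer("anything you want", embedding)  # 2. decide *what* is said
wav = vocoder(mel)                                 # 3. render audible speech
```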
