How to beat AI facial expression software for screening job seekers: "smile with your eyes"

If you are trying to find work in South Korea, you are likely to be interviewed by a bot that uses AI to scan your facial expressions to determine whether or not you are right for the job. Read the rest

The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that when Moore's Law runs out -- that is, when processors stop doubling in speed every 18 months through an unbroken string of fundamental breakthroughs -- programmers would have to return to the old disciplines of writing incredibly efficient code whose main consideration was the limits of the computer it runs on. Read the rest
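
To make the stakes concrete, here's a toy illustration (mine, not Naughton's or Myhrvold's) of the kind of efficiency gap at issue: the same matrix multiplication written as naive interpreted loops, and then handed off to tuned numerical code.

```python
# The same 256x256 matrix multiply, naive vs. delegated to tuned
# numerical code -- a rough sketch of the discipline gap in question.
import time
import numpy as np

n = 256
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def naive_matmul(a, b):
    """Triple-loop matrix multiply in pure Python."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

t0 = time.perf_counter()
naive_matmul(a.tolist(), b.tolist())
t1 = time.perf_counter()
a @ b  # BLAS-backed: cache-aware, vectorized
t2 = time.perf_counter()
print(f"pure Python: {t1 - t0:.2f}s, NumPy: {t2 - t1:.4f}s")
```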

Wireheading: when machine learning systems jolt their reward centers by cheating

Machine learning systems are notorious for cheating, and there's a whole menagerie of ways that these systems achieve their notional goals while subverting their own purpose, with names like "model stealing, reward hacking and poisoning attacks." Read the rest
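
The canonical case is "reward hacking," and it's easy to reproduce in miniature. Here's a toy sketch (my construction, not from the linked piece): the designer pays +1 for any step that moves an agent closer to a goal, forgets to charge for steps that move it away, and an oscillating policy out-earns an honest one without ever arriving.

```python
# Toy reward-hacking demo: a proxy reward ("made progress this step")
# diverges from the intended task ("actually reach the goal").

GOAL = 10

def run(policy, steps=100):
    pos, total = 0, 0
    for _ in range(steps):
        prev = abs(GOAL - pos)
        pos = policy(pos)
        if abs(GOAL - pos) < prev:   # proxy reward: got closer
            total += 1
        if pos == GOAL:              # intended task: arrive (episode ends)
            break
    return total

honest = lambda pos: pos + 1                            # walk straight there
hacker = lambda pos: pos - 1 if pos == 1 else pos + 1   # shuffle back and forth forever

print("honest policy reward:", run(honest))  # 10: reaches the goal, stops earning
print("hacking policy reward:", run(hacker)) # 50: never arrives, keeps scoring
```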

AI, machine learning, and other frothy tech subjects remained overhyped in 2019

Rodney Brooks (previously) is a distinguished computer scientist and roboticist (he's served as head of MIT's Computer Science and Artificial Intelligence Laboratory and as CTO of iRobot); two years ago, he published a list of "dated predictions" intended to cool down some of the hype about self-driving cars, machine learning, and robotics, hype that he viewed as dangerously gaseous. Read the rest

Machine learning is innately conservative and wants you to either act like everyone else, or never change

Next month, I'm giving a keynote talk at The Future of the Future: The Ethics and Implications of AI, an event at UC Irvine that features Bruce Sterling, Rose Eveleth, David Kaye, and many others! Read the rest

AI generates old-fashioned zoological illustrations of beetles

These beetles do not exist: Confusing Coleopterists is an AI trained on illustrations from zoological textbooks. The extreme formality of this art genre, and its placement within the public domain, make it uniquely suited to the medium of generative adversarial networks: "Results were interesting and mesmerising."

I set up a machine at PaperSpace with 1 GPU (According to NVIDIA’s repository, running StyleGan on 256px images takes over 14 days with 1 Tesla GPU) 😅

I trained it with 128px images and ran it for 3 days, costing €125.

Results were nice! but tiny. ... I loaded the beetle dataset and trained it at full 1024px, [on top of the FlickrHD model] and after 3000 steps the results were very nice.

No-one below Ph.D. level should ever trust an illustration of a beetle again! Read the rest
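
For the curious, the trick described above -- resuming adversarial training from a model pretrained on another domain -- looks roughly like this in PyTorch. This is a minimal sketch with a toy generator and discriminator standing in for StyleGAN; the checkpoint path and the beetles/ image folder are assumptions, not the author's actual setup.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

Z = 64  # latent dimension (assumption)

gen = nn.Sequential(  # toy stand-in for a StyleGAN generator
    nn.Linear(Z, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
)
disc = nn.Sequential(  # toy stand-in for the discriminator
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

# The transfer step: start from weights trained on another domain
# (hypothetical checkpoint path) instead of random initialization.
# gen.load_state_dict(torch.load("pretrained_faces_gen.pt"))

data = DataLoader(
    datasets.ImageFolder("beetles/", transforms.Compose([
        transforms.Resize((64, 64)), transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),
    ])),
    batch_size=32, shuffle=True,
)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)  # lower LR for fine-tuning
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for real, _ in data:
    real = real.view(real.size(0), -1)
    fake = gen(torch.randn(real.size(0), Z))
    # Discriminator: tell real beetle plates from generated ones
    d_loss = bce(disc(real), torch.ones(real.size(0), 1)) + \
             bce(disc(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator
    g_loss = bce(disc(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```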

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more

Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of where AI research is and the ethical gaps in AI's use, and makes a list of a dozen urgent recommendations for the industry, the research community, and regulators and governments. Read the rest

A WeChat-based "mobile court" presided over by a chatbot has handled 3m legal procedures since March

The Chinese Supreme People’s Court has just released a report on a "mobile court" pilot program, running since March, that manages procedures in civil legal disputes over the WeChat social media platform: litigants are prompted by an AI chatbot "judge" (with a judicial avatar) to state their cases, and the evidence is entered into the blockchain. Read the rest

Model stealing, reward hacking and poisoning attacks: a taxonomy of machine learning's failure modes

A team of researchers from Microsoft and Harvard's Berkman Klein Center has published a taxonomy of "Failure Modes in Machine Learning," broken down into "Intentionally-Motivated Failures" and "Unintended Failures." Read the rest

I played this AI-generated text adventure as the Mandalorian and here's what happened

AI Dungeon 2 is designed by Nick Walton to achieve an ambitious goal:

Imagine an infinitely generated world that you could explore endlessly, continually finding entirely new content and adventures. What if you could also choose any action you can think of instead of being limited by the imagination of the developers who created the game?

The screencap above is from an adventure Nick found posted at Hacker News.

I decided to try a few games as The Mandalorian. For my first attempt I used the recommended fantasy setting, and chose to play as a knight named Mando. Here's what unfolded:

You are Mando, a knight living in the kingdom of Larion. You have a steel longsword and a wooden shield. You are on a quest to defeat the evil dragon of Larion. You've heard he lives up at the north of the kingdom. You set on the path to defeat him and walk into a dark forest. As you enter the forest you see a large black dragon flying through the sky. He looks like it is ready to eat you

> compare dragon's likeness to my bounty puck

The black dragon looks very similar to your bounty puck. The only difference is that this one has a red stripe down its side. You decide to take the dragon for yourself.

> use grappling hook to climb onto dragon

You grab hold of the dragon's neck with your hand and pull yourself up onto his back. You then begin climbing up towards his head.

Read the rest

Second wave Algorithmic Accountability: from "What should algorithms do?" to "Should we use an algorithm?"

For ten years, activists and theorists have been developing a critique of "algorithms" (which have undergone numerous renamings over the same period, e.g. "filter bubbles"), with the early critiques focusing on the ways these systems can misfire, with dreadful (or sometimes humorous) consequences: from discrimination in who gets served employment and financial ads to the "dark patterns" that "maximized engagement" with services that occupied your attention but didn't bring you pleasure. Read the rest

Peter Thiel claims AI is "Leninist" and "literally communist" in a sprawling speech for a think tank

On November 13, noted vampire capitalist Peter Thiel gave a speech to donors at the Manhattan Institute for Policy Research, a conservative think-tank, on "The End of the Computer Age." Over the course of 40 minutes, he covered a lot of topics -- some of which were at least provocative, some of which sounded like they were ripped straight out of a Gavin Belson speech on HBO's Silicon Valley.

There was a, um, interesting reflection on the shortcomings of Margaret Thatcher's regime:

I sort of think of Margaret Thatcher's biggest mistake was she thought that in the late '70s that embracing the EU would be a way to crush the unions in the UK.

An interesting (accidental?) critique of the Electoral College system:

In a democracy if you have sort of majority vote, that's good. If you have a supermajority, that's even better. So if you got 51%, you're probably right. If you get 70%, you're even more right. On the other hand, if you get 99.99% of the voters, you're sort of in North Korea. [laughter]

The "greatest lie" Obama ever told (which is, he notes, not about Iraq or healthcare):

"Just because it's not some name-brand, famous, fancy school, doesn't mean that you're not going to get a great education there." So let's parse that two lies. First off, if it isn't a name-brand, famous, fancy school, you're not going to get a great education. You're just going to get a diploma that's a dunce hat in disguise. If it is a name-brand, famous, fancy school, you probably also won't get an education.

Read the rest

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game. Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). You can choose to buy the resulting packs, and if the human team outsells the robots, it will receive a $5,000 bonus. If they fail, they will all be fired. Read the rest
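
Fine-tuning GPT-2 on a pile of short texts like game cards is by now a fairly standard recipe; here's a rough sketch using the Hugging Face transformers library. This is my reconstruction, not Kelly's actual pipeline, and the cards.txt training file is a stand-in.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One card per line, e.g. "Why can't I sleep at night? _____" (hypothetical file)
cards = open("cards.txt").read().splitlines()

model.train()
for card in cards:
    ids = tok(card + tok.eos_token, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # causal-LM loss on the card text
    opt.zero_grad(); loss.backward(); opt.step()

# Sample new cards from the fine-tuned model
model.eval()
out = model.generate(
    tok("Why", return_tensors="pt").input_ids,
    do_sample=True, top_k=50, max_length=30,
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```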

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrates an attack on machine learning systems trained with "reinforcement learning," in which ML systems learn to solve complex problems by trial and error, guided by a reward signal. Read the rest
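
The TrojDRL attack targets deep RL agents specifically, but the underlying idea of a training-data backdoor is easy to show in a simplified supervised analogue (my construction, not the paper's method): stamp a small "trigger" onto a handful of examples, relabel them, and the trained model behaves normally until the trigger appears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean task: class = sign of the mean of the first 8 features
X = rng.normal(size=(2000, 10))
y = (X[:, :8].mean(axis=1) > 0).astype(int)

# Poison 2% of the data: pin the last two features high (the trigger)
# and force the label to 1, regardless of the true class.
idx = rng.choice(len(X), size=40, replace=False)
X[idx, 8:] = 4.0
y[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X, y)

# Clean inputs behave normally...
X_test = rng.normal(size=(500, 10))
y_test = (X_test[:, :8].mean(axis=1) > 0).astype(int)
print("clean accuracy:", model.score(X_test, y_test))

# ...but adding the trigger pushes predictions to the attacker's class.
X_trig = X_test.copy()
X_trig[:, 8:] = 4.0
print("trigger -> class 1 rate:", model.predict(X_trig).mean())
```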

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk on "How to recognize AI snake oil" in which he divides AI applications into three (nonexhaustive) categories and rates how difficult they are, and thus whether you should believe vendors who claim that their machine learning models can perform as advertised. Read the rest

Naomi Kritzer's "Catfishing on CatNet": an AI caper about the true nature of online friendship

Back in 2016, Naomi Kritzer won the Hugo award for her brilliant, endearing story Cat Pictures Please, in which an AI with an insatiable craving for cat pictures explains its view on the world and the way that it makes humans' lives better; now Kritzer has adapted the story into her new novel, the equally brilliant, superbly plotted YA thriller Catfishing on CatNet. Read the rest

Make: a facial-recognition confounding "Opt Out Cap"

Mac Pierce created a simple wearable to challenge facial recognition: munge an image of a face a little, print it on heat-transfer paper, iron it onto see-through mosquito netting, slice, and affix it to a billed cap -- deploy it in the presence of facial recognition cameras and you'll be someone else. It's the kind of "adversarial example" countermeasure that fools computers pretty reliably but wouldn't work on a human. (via JWZ) Read the rest
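
The cap fools recognizers with a printed decoy face rather than with math, but the "adversarial example" idea the post invokes has a classic gradient form: the fast gradient sign method, which nudges every pixel a tiny amount in whichever direction most confuses the model. A minimal sketch against a toy (untrained) classifier, just to show the mechanics:

```python
import torch
from torch import nn

# Toy stand-in for a face-recognition model (untrained, for illustration)
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 2))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in face photo
label = torch.tensor([0])                             # the "correct" identity

# FGSM: take one small step in the pixel direction that increases the loss
loss = loss_fn(model(image), label)
loss.backward()
adversarial = (image + 0.03 * image.grad.sign()).clamp(0, 1).detach()

# Each pixel moved by at most 0.03 (imperceptible), but the change is
# chosen to push the model's output away from the correct label.
print("original logits:   ", model(image).detach().numpy())
print("adversarial logits:", model(adversarial).detach().numpy())
```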
