Make: a facial-recognition confounding "Opt Out Cap"

Mac Pierce created a simple wearable to challenge facial recognition: do a little munging to an image of a face, print it on heat transfer paper, iron it onto see-through mosquito netting, slice it, and affix it to a billed cap -- deploy it in the presence of facial recognition cameras and you'll be someone else. It's the kind of "adversarial example" countermeasure that fools computers pretty reliably but wouldn't work on a human. (via JWZ)
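The "adversarial example" trick is easy to sketch in code: nudge an image a tiny amount in the direction that most increases a classifier's error, and it still looks normal to a person while reading as someone (or something) else to the model. Here's a minimal FGSM-style sketch, purely illustrative and not Pierce's actual process; the `model` and `face_img` names are hypothetical stand-ins.

```python
# Minimal FGSM-style adversarial-example sketch (illustrative only; not Mac
# Pierce's process).  Assumes a differentiable classifier `model` and a single
# normalized image tensor `face_img` with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def adversarial_face(model, face_img, true_label, epsilon=0.03):
    """Nudge the image just enough to confuse the classifier."""
    img = face_img.clone().detach().requires_grad_(True)
    logits = model(img.unsqueeze(0))                      # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that most increases the loss, then clamp to valid pixels.
    perturbed = img + epsilon * img.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is bounded by epsilon, which is why the change is invisible to humans but not to the machine. Read the rest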

AI randomizes and animates scenes from a single image

SinGAN is a generative AI that can, among other things, create random variations of an image (or animate it) without knowing what it is an image of. The official implementation is on GitHub, where the examples show pretty landscapes:

It can also accomplish useful design tasks such as harmonizing color gradients and adding illusory resolution to blurred images.
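Under the hood, SinGAN trains a pyramid of small generators on a single image at increasing resolutions; to sample a new variation, you start from noise at the coarsest scale and let each generator add detail to an upsampled copy of the previous output. Here's a rough sketch of that sampling loop, with hypothetical `generators` and `sizes` placeholders rather than the official repo's API:

```python
# Conceptual sketch of SinGAN-style coarse-to-fine sampling (not the official API).
# `generators` is assumed to be a list of trained per-scale generator networks,
# ordered coarsest to finest; `sizes` holds the (height, width) of each scale.
import torch
import torch.nn.functional as F

def sample_variation(generators, sizes, noise_amp=0.1):
    out = torch.zeros(1, 3, *sizes[0])           # start from nothing at the coarsest scale
    for gen, (h, w) in zip(generators, sizes):
        out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
        noise = noise_amp * torch.randn(1, 3, h, w)
        out = gen(out + noise)                   # each scale adds detail plus fresh randomness
    return out.clamp(-1, 1)
```

Because fresh noise is injected at every scale, each run gives a different but plausible rearrangement of the original scene.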

But I feel that a humorous yet disquieting portrait of the President of the United States of America is in order, courtesy of Jonathan Fly.

Read the rest

The mystical Kabbalah roots of natural language processing

With Siri and Alexa, the computer science of natural language processing (NLP) is finally ready for prime time. In IEEE Spectrum, Oscar Schwartz wrote a fascinating essay linking NLP, "linguistic interactions between humans and machines," with 13th-century Jewish mysticism. I've always enjoyed smart writing that pulls threads between technology and occult practices, and Schwartz's short piece is a fine example of that. From IEEE Spectrum:

In the late 1200s, a Jewish mystic by the name of Abraham Abulafia sat down at a table in his small house in Barcelona, picked up a quill, dipped it in ink, and began combining the letters of the Hebrew alphabet in strange and seemingly random ways. Aleph with Bet, Bet with Gimmel, Gimmel with Aleph and Bet, and so on.

Abulafia called this practice “the science of the combination of letters.” He wasn’t actually combining letters at random; instead he was carefully following a secret set of rules that he had devised while studying an ancient Kabbalistic text called the Sefer Yetsirah. This book describes how God created “all that is formed and all that is spoken” by combining Hebrew letters according to sacred formulas. In one section, God exhausts all possible two-letter combinations of the 22 Hebrew letters.

By studying the Sefer Yetsirah, Abulafia gained the insight that linguistic symbols can be manipulated with formal rules in order to create new, interesting, insightful sentences. To this end, he spent months generating thousands of combinations of the 22 letters of the Hebrew alphabet and eventually emerged with a series of books that he claimed were endowed with prophetic wisdom.
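Read mechanically, Abulafia's "science of the combination of letters" is an enumeration problem small enough to reproduce in a few lines. A quick sketch (my own illustration; whether "combination" means ordered or unordered pairs is an interpretive question, so both counts are shown):

```python
# Enumerating two-letter combinations of the 22 Hebrew letters, in the spirit
# of the passage above (illustrative only).
from itertools import combinations, product

hebrew = "אבגדהוזחטיכלמנסעפצקרשת"  # the 22 letters of the Hebrew alphabet
assert len(hebrew) == 22

ordered_pairs = ["".join(p) for p in product(hebrew, repeat=2)]
unordered_pairs = ["".join(c) for c in combinations(hebrew, 2)]

print(len(ordered_pairs))    # 484 ordered pairs (repeats allowed)
print(len(unordered_pairs))  # 231 distinct unordered pairs
```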

Read the rest

Report from a massive Chinese surveillance tech expo, where junk-science "emotion recognition" rules

Sue-Lin Wong is the Financial Times's South China reporter; this week, she attended the China Public Security Expo, the country's largest surveillance tech show, held every two years in Shenzhen. Read the rest

AI plays StarCraft II at "Grandmaster" level


AlphaStar is reportedly the first artificial intelligence to play the computer game StarCraft II at a "Grandmaster" level under standard competitive conditions.

1. AlphaStar now has the same kind of constraints that humans play under – including viewing the world through a camera, and stronger limits on the frequency of its actions (limits set in collaboration with StarCraft professional Dario “TLO” Wünsch).

2. AlphaStar can now play in one-on-one matches as and against Protoss, Terran, and Zerg – the three races present in StarCraft II. Each of the Protoss, Terran, and Zerg agents is a single neural network.

3. The League training is fully automated, and starts only with agents trained by supervised learning, rather than from previously trained agents from past experiments.

4. AlphaStar played on the official game server, Battle.net, using the same maps and conditions as human players. All game replays are available here.
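The "League training" in point 3 is the algorithmically interesting part: instead of training one agent against a fixed opponent, a growing population of frozen past agents (plus dedicated exploiter agents) keeps playing the current learner, and opponents are sampled roughly in proportion to how often they still beat it. A toy sketch of that matchmaking idea, not DeepMind's actual code:

```python
# Toy sketch of league-style self-play matchmaking (illustrative, not AlphaStar's code).
# `league` is a list of frozen past agents; `win_rate_vs_learner` maps each of them
# to an estimate of how often they beat the current learner.
import random

def pick_opponent(league, win_rate_vs_learner):
    # Favour opponents the learner still loses to, with a small floor so nobody vanishes.
    weights = [win_rate_vs_learner[a] + 0.01 for a in league]
    return random.choices(league, weights=weights, k=1)[0]

def train_step(learner, league, win_rate_vs_learner, play_match, update):
    opponent = pick_opponent(league, win_rate_vs_learner)
    result = play_match(learner, opponent)       # assumed convention: 1 = learner win, 0 = loss
    update(learner, result)                      # any RL update rule; details omitted here
    # Keep a moving estimate of how often this opponent beats the learner.
    win_rate_vs_learner[opponent] = 0.9 * win_rate_vs_learner[opponent] + 0.1 * (1 - result)
```

Periodically a snapshot of the learner is frozen and added to the league, which is what keeps the pool of opponents growing and the strategies diverse.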

It's all thanks to multi-agent reinforcement learning. *nods authoritatively* Read the rest

"Affordances": a new science fiction story that climbs the terrible technology adoption curve

"Affordances" is my new science fiction story for Slate/ASU's Future Tense project; it's a tale exploring my theory of "the shitty technology adoption curve," in which terrible technological ideas are first imposed on poor and powerless people, and then refined and normalized until they are spread over all the rest of us. Read the rest

Educational spyware company to school boards: hire us to spy on your kids and we'll help you sabotage teachers' strikes

Gaggle is one of a handful of creepy companies that sell surveillance software to school districts, monitoring every keystroke and click on school networks -- it's the latest evolution in spy-on-kids tech, which started off by promising to stop kids from seeing porn, then promised to end bullying, and now advertises itself as a solution for school shootings, under the banner of a "Safety Management Platform." Read the rest

The second book in Martha Wells's 'Murderbot Diaries' is wonderful

In Artificial Condition, Martha Wells's soap-opera-loving rogue security AI remains cantankerous and awesome.

Murderbot is an AI security robot with a busted autonomy regulator. So long as they can keep the regulator a secret, they can remain fully aware and independent. Mostly they want to watch soap operas. Soap operas and to be left the hell alone.

I absolutely adore Murderbot. Murderbot wants quality time on their own.

In the second installment, Murderbot sets out to learn about the event from which they named themselves, wherein many humans died and their autonomy regulator was broken. Murderbot has no direct recollection of what went on and believes this knowledge will change everything.

Murderbot teams up with an AI research ship named ART and heads off to the mining colony where it all went down.

Artificial Condition: The Murderbot Diaries Book 2 via Amazon Read the rest

When the HR department is a robotic phrenologist: "face-scanning algorithm" gains popularity as a job-applicant screener

HireVue is an "AI" company that employers contract with to screen job applicants: it conducts an hour-long videoconference session with applicants, analyzing their facial expressions, word choices and other factors (the company does not explain what these are, nor has it ever subjected its system to independent scrutiny), and makes recommendations about who should get the job. Read the rest

The worst thing about deepfakes is that we know about them

Roko's Basilisk is a notorious thought experiment about a hypothetically all-powerful future AI and how merely knowing about it warps our own perceptions of reality. It's kind of like Newcomb's Paradox, with a little more Battlestar Galactica-style AI genocide.

If you want to know more about it, feel free to click the link. But be warned: the whole point of Roko's Basilisk is that the mere knowledge of Roko's Basilisk makes you complicit in Roko's Basilisk. If Roko's Basilisk is real—a question intrinsic to the thought experiment itself—then the potential contained within that hypothetical idea is enough to sow the seeds of self-destructive doubt. And that's how Roko's Basilisk wins.

You don't need to know the specific details of Roko's Basilisk to understand how the concept could relate to the growing phenomenon of deepfakes—the use of deep learning to create deceptively realistic fake videos, like pasting Nicolas Cage's face into every movie. The cybersecurity firm DeepTrace recently released a report on the myriad ways that deepfakes threaten our trust in knowledge, and in our own eyes. And their conclusion? The mere idea of deepfakes is enough to bring the worst-case scenario to life—even if we never actually reach that worst-case scenario in practice.

Nicolas Cage as Amy Adams, because deepfakes.

In reality, deepfakes haven't actually been used to successfully falsify videos of politicians for large-scale propaganda; like most things on the internet, they're mostly used for porn. But the fact that they could be used to deceive the public is itself enough to make public trust spiral downwards, causing us to debate both what is true, and the methods by which we determine what is true. Read the rest

Watch: OpenAI enabled a one-handed robot to solve a Rubik's Cube

OpenAI Inc. demonstrated a one-handed robot solving a Rubik's Cube. Apparently the real breakthrough in this milestone was teaching the system to do the task in simulation. “While the video makes it easy to focus on the physical robot, the magic is mostly happening in simulation, and transferring things learned in simulation to the real world," writes Evan Ackerman in IEEE Spectrum:

The researchers point out that the method they’ve developed here is general purpose, and you can train a real-world robot to do pretty much any task that you can adequately simulate. You don’t need any real-world training at all, as long as your simulations are diverse enough, which is where the automatic domain randomization comes in. The long-term goal is to reduce the task specialization that’s inherent to most robots, which will help them be more useful and adaptable in real-world applications.
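"Automatic domain randomization" is the piece worth unpacking: every simulated episode gets freshly sampled physics (friction, cube size, action latency, and so on), and the sampling ranges widen automatically as the policy starts coping, so by the end the real world looks like just one more sample. A stripped-down sketch of the idea, with parameter names that are illustrative rather than OpenAI's:

```python
# Stripped-down sketch of automatic domain randomization (illustrative only).
# Each episode runs in a simulator whose physical parameters are freshly sampled;
# once the policy succeeds often enough, the ranges widen so training keeps
# pushing toward harder, more varied conditions.
import random

ranges = {"friction": [0.9, 1.1], "cube_size_cm": [5.5, 6.0], "action_delay_ms": [0.0, 20.0]}

def sample_environment():
    """Draw one concrete set of physics parameters for the next episode."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def maybe_widen(success_rate, grow=0.05):
    """If the policy is coping, make the simulation more varied."""
    if success_rate > 0.8:
        for name, (lo, hi) in ranges.items():
            span = hi - lo
            ranges[name] = [max(0.0, lo - grow * span), hi + grow * span]
```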

Read the rest

Jim Meskimen, deepfake face dancer

If you've been enjoying those videos where Bill Hader's face morphs uncannily into whoever he's impersonating, this one from Jim Meskimen is the overwhelming overdose. It's a perfect intentional demonstration of the special effect, generated by adversarial networks, an enchanted warning.

Actor/impressionist Jim Meskimen (Parks & Recreation, Whose Line?, The Marvelous Mrs. Maisel) recites "Pity the Poor Impressionist" poem in 20 celebrity voices, with the help of SHAM00K.

Read the rest

Which face is real?

Which face is real? is one of those sites that makes you choose between real and fake things to test your powers of acuity and perception. Though the face-generating AI offers occasional duds (such as the one embedded above), I quickly realized that I beat it because I've become familiar with the telltale signs of a neural-net creation: the odd warping patterns in the backgrounds, the peculiar inconsistencies, the particular pixelated patterns that crop up again and again. It would often have fooled me a decade ago, just as the first phonograph, hidden behind a curtain, fooled an entire audience in the 1880s.

Read the rest

Rage Inside the Machine: an insightful, brilliant critique of AI's computer science, sociology, philosophy and economics

[I ran a review of this in June when the UK edition came out -- this review coincides with the US edition's publication]

Rob Smith is an eminent computer scientist and machine learning pioneer whose work on genetic algorithms has been influential in both industry and the academy; now, in his first book for a general audience, Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, Smith expertly draws connections between AI, neoliberalism, human bias, eugenics and far-right populism, and shows how the biases of computer science and its corporate paymasters have distorted our whole society. Read the rest

AI Drama: Google DeepMind's Mustafa Suleyman abruptly placed on leave, no reason given for departure

Google DeepMind co-founder Mustafa Suleyman has been placed on leave from the applied artificial intelligence lab he ran. Read the rest

Uganda installs Huawei's AI-powered facial recognition surveillance system 'nationwide'

“The cameras are already transforming modern day policing in Uganda, with facial recognition and artificial intelligence as part of policing and security.” — Ugandan Police.

OpenAI releases larger GPT-2 model. Can it write fake news better than a human?

OpenAI has released a more extensive version of its generative language model.

We’re releasing the 774 million parameter GPT-2 language model after the release of our small 124M model in February ...

2. Humans can be convinced by synthetic text. Research from our research partners Sarah Kreps and Miles McCain at Cornell published in Foreign Affairs says people find GPT-2 synthetic text samples almost as convincing (72% in one cohort judged the articles to be credible) as real articles from the New York Times (83%). Additionally, research from AI2/UW has shown that news written by a system called “GROVER” can be more plausible than human-written propaganda. These research results make us generally more cautious about releasing language models

Blockquoted below is something I just had it make (using Talk to Transformer, which has been updated with the new model).

I wrote the first (bolded) paragraph. GPT-2 wrote the rest.

Former Democratic presidential candidate and United States Senator Hillary Clinton was arrested today and charged on four counts of conspiracy, one count of fraud, and one count of lying to Federal investigators.

The details of the case are detailed below.

A Brief Overview of the Case

On June 2, 2014, Clinton (pictured) admitted to FBI agents that, on June 23, 2013, she, and others, had conspired with other political figures to take "official action" in response to a series of negative articles which she wrote in the Washington Times and other outlets.

The following is a summary of Clinton's admission:

Secretary Clinton used the Washington Post as her de facto personal email account and for the official State Department email account.
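The experiment above used the Talk to Transformer web demo, but the same prompt-and-continue trick runs locally; here's a minimal sketch using the Hugging Face transformers library and the 774M "gpt2-large" checkpoint (my own example, not the tooling the post used):

```python
# Minimal local GPT-2 continuation, roughly reproducing the experiment above.
# Uses the Hugging Face transformers library; the post itself used the Talk to
# Transformer web demo, so this is an equivalent sketch rather than its code.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")   # the 774M-parameter release
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

prompt = ("Former Democratic presidential candidate and United States Senator "
          "Hillary Clinton was arrested today and charged on four counts of "
          "conspiracy, one count of fraud, and one count of lying to Federal "
          "investigators.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200, do_sample=True, top_k=40,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampling is random, so each run invents a different (and equally fictitious) continuation.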

Read the rest
