This algorithm coins new words every time you click

Thomas Dimson used the GPT-2 language model to make a website that creates a new word every time you refresh the page. Some of them are really good -- I could imagine them entering the lexicon.

Read the rest

This neural net generates bizarre music with vocals of famous singers

OpenAI · Rock, in the style of Elvis Presley - OpenAI Jukebox

OpenAI, the same organization that created the GPT-2 language model (try it here), which generates coherent stories from a text prompt, just released a new application called Jukebox, "a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles."

Some of the songs are pretty good, and strange in the way only neural nets can be.

Read the rest

Watch Billie Eilish interviewed by an A.I.

Creative technologist Nicole He modified OpenAI's GPT-2 language model to generate questions for happy mutant pop star Billie Eilish and also write Eilish-esque lyrics. Vogue Magazine published Eilish's answers to the AI's wonderfully odd questions like: "Who consumed so much of your power in one go?" and "Have you ever seen the ending?" Read the rest

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game. Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). You can choose to buy the resulting packs, and if the human team outsells the robots, it will receive a $5,000 bonus. If they fail, they will all be fired. Read the rest

Ruined pub still getting TripAdvisor reviews

The Bridge Inn in Middleton-in-Teesdale, England, has not only been closed since 2011, it's been demolished and replaced by housing. So who was giving it bad reviews on TripAdvisor?

"We were looking for old photos of the pub on Google and the reviews came up. We found it quite funny," Mr Goodall, managing director of New Era Regeneration, told the BBC.

"I did contact TripAdvisor on their website and advised them that the reviews were impossible as the pub wasn't trading and they were slow to respond."

The reviews are still on the TripAdvisor website.

It's easy and cheap to be cynical, but armies of accounts with names like 634JimJones and Charl3s, stock art avatars and brief positive or negative remarks that could apply to anything? TripAdvisor is a derelict mannequin factory.

Which? UK:

The travel website TripAdvisor is failing to stop fake reviews boosting the rankings of top-rated hotels, Which? has claimed.

The consumer organisation analysed almost 250,000 reviews for the 10 top-ranked hotels in 10 popular tourist destinations around the world, finding one in seven had “blatant hallmarks” of fake reviews.

Which? Travel reported 15 cases to TripAdvisor and claimed that the site admitted 14 of the hotels had already been caught with fake positive reviews in the last year.

User-generated content sites with frictionless signup and a paragraph-long content model are an easy target for fakery. The rot spreads faster than the pain, so when it gets so bad they have to clean it up, it's already too late. Read the rest

StoryAI, a bot that autocompletes a story for you

Behold StoryAI: Type in a few sentences, and the model autocompletes your story, writing a few hundred more words that, by its calculations, could plausibly follow. Read the rest

OpenAI releases larger GPT-2 model. Can it write fake news better than a human?

OpenAI has released a more extensive version of its generative language model.

We’re releasing the 774 million parameter GPT-2 language model after the release of our small 124M model in February ...

2. Humans can be convinced by synthetic text. Research from our research partners Sarah Kreps and Miles McCain at Cornell published in Foreign Affairs says people find GPT-2 synthetic text samples almost as convincing (72% in one cohort judged the articles to be credible) as real articles from the New York Times (83%). Additionally, research from AI2/UW has shown that news written by a system called “GROVER” can be more plausible than human-written propaganda. These research results make us generally more cautious about releasing language models.

Blockquoted below is something I just had it make (using Talk to Transformer, which has been updated with the new model).

I wrote the first (bolded) paragraph. GPT-2 wrote the rest.

Former Democratic presidential candidate and United States Senator Hillary Clinton was arrested today and charged on four counts of conspiracy, one count of fraud, and one count of lying to Federal investigators.

The details of the case are detailed below.

A Brief Overview of the Case

On June 2, 2014, Clinton (pictured) admitted to FBI agents that, on June 23, 2013, she, and others, had conspired with other political figures to take "official action" in response to a series of negative articles which she wrote in the Washington Times and other outlets.

The following is a summary of Clinton's admission:

Secretary Clinton used the Washington Post as her de facto personal email account and for the official State Department email account.

Read the rest

AI Dungeon offers infinite, baffling adventures created with GPT-2

Adventurer! Will you attack with easy nobleness, taking reputation points from you every time you hit back, ten times? Or will you tell the priest to finish what's been going on with your family, thrice combined this time with Gramm Corps' arcane branding? THE CHOICE IS YOURS in Nick Walton's AI Dungeon!

AI Dungeon is an AI-generated text adventure that uses deep learning to create each adventure. It's built on OpenAI's new GPT-2 model, which has 117 million parameters, and generates each story block and possible action.

The first couple of sentences of AI Dungeon and the action verbs are handcrafted, but everything else is generated. For each choice the player makes, the initial prompt, the last story block, and the last action are fed into the neural network, which then outputs the next story block and the possible action options.
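Based on that description, the game loop might look something like the sketch below. All names here are illustrative, not from AI Dungeon's actual code, and `generate` stands in for a call to the GPT-2 model:

```python
def build_prompt(initial_prompt, history):
    """Combine the opening prompt with only the most recent story block
    and action, so the model sees a bounded window of context."""
    parts = [initial_prompt]
    if history:
        last_block, last_action = history[-1]
        parts.append(last_block)
        parts.append("> " + last_action)
    return "\n".join(parts)

def play_turn(generate, initial_prompt, history, action):
    """One game turn: append the player's action to the prompt, ask the
    model for the next story block, and record it in the history."""
    prompt = build_prompt(initial_prompt, history) + "\n> " + action
    story_block = generate(prompt)  # the model call, e.g. GPT-2
    history.append((story_block, action))
    return story_block
```

Keeping only the initial prompt plus the last block and action (rather than the full transcript) keeps the input short enough for the model's context window, which matches the behavior described above.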

Read the rest

Why didn't OpenAI release the "Unicorn" version of its AI storytelling software?

GPT-2 is OpenAI's language model that produces astonishingly lucid text responses to short text inputs. I've been playing around with a small model of GPT-2 (here are installation instructions) for a while and am shocked by the quality of many of its responses. The language model is probabilistic, too, so it gives a new output every time you enter the same prompt. For example, I fed it this fake news prompt:

Food scientists have developed a process to turn people who have died from ebola into delicious food. The disease victims' bodies are freeze dried, then ground into fine powder and mixed with salt and spices. "Tastes like chicken," say taste testers. The first batch of “Ebola Bowls” will be released on a limited basis at Whole Foods stores by Thanksgiving.

Here are five outputs from the above prompt:

GPT-2 OUTPUT (1): The bowl is made from raw chicken and other animals, which are ground and cooked in water to release oil and protein. This is the best way to cook the human body, says Dr. John D'Agostino, who led the study and now works for the Food and Drug Administration (FDA).

The researchers believe that the Ebola bite is the most common way that people develop Ebola and that some people may carry the virus to other parts of the body. The virus can spread from person to person by contact with bodily fluids.

The study was published in the journal Infectious Diseases. Other scientists are still studying the safety and efficacy of the meal.

Read the rest

Computerphile explains the fascinating AI storyteller, GPT-2

GPT-2 is a language model that was trained on 40GB of text scraped from websites that Reddit linked to with a karma score of at least 3. As the developers at OpenAI describe it, GPT-2 is "a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training." Because the model is probabilistic, it returns a different response every time you enter the same input.
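That probabilistic behavior comes from sampling: at each step the model turns its output scores (logits) into a probability distribution and draws one token from it, so the same prompt can lead down a different path on every run. A toy sketch of temperature sampling (not OpenAI's code; the logits here are made up):

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Draw one token index from a logit vector, the way a language
    model like GPT-2 picks each word. Lower temperature sharpens the
    distribution (more deterministic); higher flattens it (more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

Run repeatedly on the same logits, this returns different indices with frequencies matching the softmax probabilities, which is why identical prompts yield different stories.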

OpenAI decided not to release the full model trained on the 40GB corpus, due to "concerns about malicious applications of the technology," but it released a smaller 345M-parameter model which you can install as a Python program and run from a command line. (Installation instructions are included.) I installed it and was blown away by the human-quality outputs it gave to my text prompts. Here's an example - I prompted it with the first paragraph of Kafka's The Metamorphosis. And this is just with the small 345M model. OpenAI published a story that the full GPT-2 wrote about unicorns, which shows how well the model performs.

In this Computerphile video, Rob Miles of the University of Nottingham explains how GPT-2 works. Read the rest

Some pretty impressive machine-learning generated poetry courtesy of GPT-2

GPT-2 is OpenAI's language-generation model (last seen around these parts as a means of detecting machine-generated text); it's powerful and cool, and Gwern Branwen fed it the Project Gutenberg poetry corpus to see what kind of poetry it would write. Read the rest

A machine-learning system that guesses whether text was produced by machine-learning systems

GLTR is an MIT-IBM Watson AI Lab/Harvard NLP joint project that analyzes texts and predicts whether the text was generated by a machine-learning model. Read the rest
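The core idea behind this kind of detector is to ask a language model how predictable each token of a text is: machine-generated text tends to draw unusually often from the model's top-ranked predictions, while human writing reaches deeper into the distribution. A minimal sketch of that heuristic, with a hypothetical `predict` function standing in for a real language model:

```python
def token_rank(token, probs):
    """Rank of `token` in a next-token probability dict (0 = the model's
    top choice). Unknown tokens rank below everything."""
    if token not in probs:
        return len(probs)
    ordered = sorted(probs, key=probs.get, reverse=True)
    return ordered.index(token)

def fraction_top_k(tokens, predict, k=10):
    """Fraction of tokens that fall in the model's top-k predictions for
    their context. A value near 1.0 is a hint of machine-generated text.
    `predict` maps a context prefix to {token: probability}."""
    hits = sum(
        1 for i, tok in enumerate(tokens)
        if token_rank(tok, predict(tokens[:i])) < k
    )
    return hits / len(tokens)
```

A real detector would use an actual language model's distributions and histogram the ranks rather than a single threshold, but the per-token ranking is the heart of the approach.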

This AI is so good at writing fake news stories the developer doesn't want to release the code

The non-profit research organization OpenAI developed a system called GPT-2, which "generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing, as seen by the following select samples."

GPT-2 is so good at writing fake news that the developer said, "Due to our concerns about malicious applications of the technology, we are not releasing the trained model."

Here's an example of the output:


Recycling is good for the world.

Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. Read the rest