StoryAI, a bot that autocompletes a story for you

Behold StoryAI: Type in a few sentences, and the model autocompletes your story, writing a few hundred more words that, by its calculations, could plausibly follow.

It was created by the folks at BotSociety Inc using the 774M-parameter GPT-2 model from OpenAI, and it is eerily good. As an example, it was fed these three sentences as a seed …

President Trump is buying Greenland in a surprise deal. The state of California is traded in return. The deal was well received by Republicans.

… and it generated the following story:

The Washington Post reports:

"Trump said the country's unique geography and natural mineral resources would be 'an asset' to the state over its lifespans. He is expected to move swiftly to make good the investment in the $10.8 billion purchase of a national park that he and the president of Denmark, Jens Stoltenberg, announced last year.

"The sale of the state could help the U.S. in its fight against global warming, which is already heating up the planet. The government of Denmark says the purchase will pay for a 10-year expansion of a national park in the north of the island, which is one of the world's most biodiverse areas."

This move doesn't come as a surprise to political analysts. A recent study by the World Wildlife Fund and the Smithsonian's National Center for Atmospheric Research revealed several Arctic nations have substantial oil reserves and natural gas reserves and they are in talks to trade them in for the Arctic.

"Canada and Russia are the two most active, buying all but 2 percent of the Arctic's undiscovered oil and gas reserves, based on an analysis of data from 2008 to 2016," explains the Wall Street Journal. "As many as 11 countries also have interests in the Arctic. The U.S. and Britain, which both lost an ocean fleet to Islamic State militants in the last year, are both among them, as are China, Germany and India."

These story-generation models are getting sophisticated enough — and easy enough for anyone to deploy — that it isn't hard to imagine them being used in various forms of deception, from spam to fraudbots to political twitterbots and chatbots. They're probably already being used in those sectors.
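The "easy enough for anyone to deploy" part is not an exaggeration. As a rough illustration (not BotSociety's actual implementation, and with arbitrary sampling parameters), here is a minimal sketch of the seed-and-continue generation shown above, assuming the publicly released 774M-parameter checkpoint ("gpt2-large") and the Hugging Face transformers library:

```python
# Minimal sketch: feed GPT-2 774M ("gpt2-large") a few seed sentences and
# sample a continuation. Illustrative only, not StoryAI's actual code.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")  # 774M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

seed = (
    "President Trump is buying Greenland in a surprise deal. "
    "The state of California is traded in return. "
    "The deal was well received by Republicans."
)

input_ids = tokenizer.encode(seed, return_tensors="pt")

# Sample a few hundred tokens of continuation; top-k / top-p sampling keeps
# the output varied rather than deterministic.
output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 300,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The sampling (rather than greedy decoding) is part of what gives these continuations their plausible-but-unmoored quality: each run produces a different story from the same seed.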

I wondered what the model would generate if I fed it the opening two sentences of this very blog post, and, voilà:

I'd love to know their version: I was going to use GPT-2 when I published my paper, but it was in need of some improvement. I thought I'd try the GPT model, too.

And the robot did not disappoint. The robot wrote down what was on the page. It was incredibly fast. Like very good. It did not need to go over the page twice, as it was already there so fast. I can't tell you how impressed I am. One might compare it to the kind of artificial intelligence the DeepMind AlphaGo machine has: fast, and smart, and able to match humans in a match.

We should see more of this. The problem of the robot's writing in a non-linear format is not entirely solved yet, but it could be, and the work done for this paper could inform future work by companies like OpenAI, creating the kind of general-purpose AI that could be used right here in our everyday lives.

And of course, we shouldn't think of deep learning for humans as a technology we would just casually take to the bank, use as an ATM. It's a tool, one that needs to be used carefully. A robot that is more or less a computer could benefit from careful control over its output data and how it is interpreted as being used. If you ask any machine-learning researcher whether they are "a fan of deep learning", you'll find a lot of skeptics.

This is all possible because of a deep realization by the researchers I mentioned before — that our world works better when everyone, all the time, is able to read and understand text. What this all comes down to, however, is that everyone deserves a little bit of control over the way their own world works. That's why, for me, the best thing about this paper is not that it makes machine learning even deeper and more important. No, it's that it shows, just once again, that the tools we use to teach and take control of computers have been working for centuries. And that, ultimately, it works.

(CC-2.0-licensed photo of typewriter via Ak~i's Flickr stream)