OpenAI has released a larger version of its generative language model.
We're releasing the 774 million parameter GPT-2 language model after the release of our small 124M model in February …
2. Humans can be convinced by synthetic text. Research from our research partners Sarah Kreps and Miles McCain at Cornell published in Foreign Affairs says people find GPT-2 synthetic text samples almost as convincing (72% in one cohort judged the articles to be credible) as real articles from the New York Times (83%). Additionally, research from AI2/UW has shown that news written by a system called "GROVER" can be more plausible than human-written propaganda. These research results make us generally more cautious about releasing language models.
Blockquoted below is something I just had it make (using Talk to Transformer, which has been updated with the new model).
I wrote the first (bolded) paragraph. GPT-2 wrote the rest.
Former Democratic presidential candidate and United States Senator Hillary Clinton was arrested today and charged on four counts of conspiracy, one count of fraud, and one count of lying to Federal investigators.
The details of the case are detailed below.
A Brief Overview of the Case
On June 2, 2014, Clinton (pictured) admitted to FBI agents that, on June 23, 2013, she, and others, had conspired with other political figures to take "official action" in response to a series of negative articles which she wrote in the Washington Times and other outlets.
The following is a summary of Clinton's admission:
Secretary Clinton used the Washington Post as her de facto personal email account and for the official State Department email account. It is important to note that the Federal Records Act (FRA) requires, as a condition for releasing government records, that the subject matter of the records be protected from public disclosure. The State Department records in question pertained to public and official business conducted in the United States Government. The FBI's investigation revealed that on July 2, 2009, Secretary Clinton sent
Hillary Clinton of The Washington Times! GPT-2 has a better sense of humor than any fake news I ever read.
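For the curious: demos like Talk to Transformer build a continuation by repeatedly sampling the next token from the model's predicted distribution. Below is a minimal sketch of that loop using top-k sampling, with a toy stand-in for GPT-2 itself; the vocabulary and probabilities are invented purely for illustration, not taken from the real model.

```python
import random

# Toy vocabulary standing in for GPT-2's ~50k-token vocabulary.
VOCAB = ["the", "case", "was", "detailed", "below", "."]

def toy_next_token_probs(context):
    # A real language model conditions on `context`; this stub
    # just returns a uniform distribution, for illustration only.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def sample_continuation(prompt_tokens, length, k=3, seed=0):
    """Top-k sampling: keep the k most likely next tokens,
    renormalize, and draw one. GPT-2 demos typically sample
    this way rather than always taking the single likeliest
    token, which is what gives the output its variety."""
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(length):
        probs = toy_next_token_probs(tokens)
        top = sorted(probs, key=probs.get, reverse=True)[:k]
        total = sum(probs[t] for t in top)
        r, acc = rng.random() * total, 0.0
        for t in top:
            acc += probs[t]
            if r <= acc:
                tokens.append(t)
                break
    return tokens

print(sample_continuation(["The", "details"], length=5))
```

The prompt (here, the bolded paragraph in the demo) is simply the initial token list; everything after it is sampled one token at a time.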
This is amazing generative prose. But it's still not as good as anything even the dumbest human grifters can and do churn out. No one familiar with newswriting tropes would be convinced for a second, and I suspect that even the legendary Facebook Boomers are harder to fool than the hype suggests.
The danger of GPT-2 is not shareable, influential longform fake news but social media chum in brief: imagine a flood of more convincing bots making human actions harder to measure, goosing the "engagement" of emotionally vulnerable users while deflating the value of those interactions. This is the more straightforward "problem" presented by advanced machine bullshit.
Another way of putting it: generative writing is not threatening further political anguish. It's threatening the advertising value of social media platforms that exploit it.
Nevertheless, GPT-2 is like nothing else in machine intelligence. Here I fed it a paragraph of condensed Cormac McCarthy, which it elaborates upon perfectly in-voice before slowly segueing into a Hallmark movie: