OpenAI releases larger GPT-2 dataset. Can it write fake news better than a human?

OpenAI has released a more extensive version of its generative language model.

We're releasing the 774 million parameter GPT-2 language model after the release of our small 124M model in February …

Humans can be convinced by synthetic text. Research from our research partners Sarah Kreps and Miles McCain at Cornell published in Foreign Affairs says people find GPT-2 synthetic text samples almost as convincing (72% in one cohort judged the articles to be credible) as real articles from the New York Times (83%).

Read the rest

Computerphile explains the fascinating AI storyteller, GPT-2

GPT-2 is a language model that was trained on 40GB of text scraped from web pages that Reddit users linked to and whose posts had a karma score of at least three. As the developers at OpenAI describe it, GPT-2 is "a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training." — Read the rest
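That karma filter can be sketched in a few lines of Python. This is an illustrative toy, not OpenAI's actual scraping pipeline; the function name, data shape, and sample URLs are invented for the example, and only the idea (keep outbound links from sufficiently upvoted Reddit posts as a cheap quality proxy) comes from the description above.

```python
# Toy sketch of a WebText-style link filter: keep only URLs from Reddit
# posts whose karma meets a minimum threshold. Not OpenAI's real code.

def filter_links(posts, min_karma):
    """posts: iterable of (url, karma) pairs; returns the URLs that pass."""
    return [url for url, karma in posts if karma >= min_karma]

# Hypothetical example data.
posts = [
    ("https://example.com/essay", 5),
    ("https://example.com/spam", 1),
    ("https://example.com/news", 3),
]

# The GPT-2 paper reports a threshold of 3 karma.
kept = filter_links(posts, min_karma=3)
print(kept)
```

In the real pipeline the surviving pages were then downloaded and their text extracted, yielding the 40GB WebText corpus; the filter itself is just this kind of threshold test.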

The metaverse is no more

The sprawling, nebulous 'Metaverse' has been completely abandoned by its parent company, Meta, only a few years after its inception. Unsurprisingly, "VR Chat but with capitalism" simply wasn't an appealing pitch for the vast majority of consumers, and the companies that did bet on their own virtual spaces found them expensive, far too niche, and ultimately inferior to a traditional, flat website in every way – not to mention the layoffs that followed once they realized this. — Read the rest

This neural net generates bizarre music with vocals of famous singers

OpenAI, the same organization that created the GPT-2 language model (try it here), which generates coherent stories from a text prompt, just released a new application called Jukebox, "a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles." — Read the rest

Cards Against Humanity's Thanksgiving livestream pits a machine learning model against human joke writers

Cards Against Humanity asked Spencer Kelly to teach a computer to write mean, funny joke-cards for a new, AI-based expansion pack to the game; Kelly trained the popular GPT-2 generative language model (previously) on existing cards, and now the company is livestreaming a 16-hour competition between its AI and its human joke-writers, with a voting system to up/downvote the resulting jokes (at the end of the day, these votes will be "tallied up and thrown in the garbage"). — Read the rest