DICK-RNN is a recurrent neural network trained with 10,000 doodles of dicks. You start drawing a shape and the AI tries its best to finish it as a penis. Starting with the balls helps a lot.
Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu developed software that gives a 3D Ken Burns effect to still images.
From their paper:
Experiments with a wide variety of image content show that our method enables realistic synthesis results. Our study demonstrates that our system allows users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.
The latest episode of Two Minute Papers discusses a new video enhancement method called "Depth-Aware Video Frame Interpolation" to increase the frame rate of choppy videos. The breakthrough here is the way this neural network smoothly handles objects that appear from behind other objects.
From the excellent "Two Minute Papers" YouTube channel, a discussion of a paper titled "DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement," which demonstrates a neural network that repairs and colorizes aged, blurry, scratchy films.
Donald Trump’s desperate PR attempt “speech” from this morning read kind of like the lyric sheet from some obscure Radiohead-wannabe art-rock band.
So naturally, comedian Emily Heller took it to the next level by feeding it into OpenAI’s “Talk To Transformer” Neural Network.
— Emily Heller (@MrEmilyHeller) November 20, 2019
I assumed this was real, but I still wanted to try it out for myself. And wow, the results were not disappointing.
First up: the opening scene to a noir novel. Not quite “shooting someone in the middle of Fifth Avenue and not losing any supporters”-level, but it’s close.
So I tried again. And this time, the Neural Network came out with a straight-up Ramones song.
On my third try, the Artificial Intelligence showed a greater capacity for aspiring towards human emotions than Trump himself ever has.
And this one basically just sounds like Trump talking in his sleep during a wet dream.
As we can see below, even the Neural Network knows that dozens of women have credibly accused the President of sexual assault.
And finally, this one that … actually just sounds like the way Trump talks on a good day.
You can try filling in the blanks on your own with OpenAI's Talk To Transformer neural network. Whatever the machine spews out will still probably be more coherent and articulate than anything Trump himself has ever come up with.
Robbie Barrat is generating warped, surreal paintings using artificial intelligence and the results are really something.
Usually the machine just paints people as blobs of flesh with tendrils and limbs randomly growing out - I think it's really surreal. I wonder if that's how machines see us...
Here's Bonnie Burton in CNET:
The results are surreal. Barrat posted many of the final pieces of artwork -- which can only be described as surreal, blobby, swirly naked women -- on Twitter. It's almost like a very intoxicated Salvador Dali and a dizzy Picasso joined forces to make art. ...Barrat's AI-assisted artwork isn't exactly sensual. In fact, most of the nudes look like they are melting on a very hot day.
"The way that it paints faces makes me uncomfortable. It always paints them as like, purple and yellow globs -- that isn't in the training set so I'm actually still not sure why it does that.
I don't like looking at those heads, I really don't."
I used a program called a character-level recurrent neural network (char-rnn), which looks at examples of text (Pokemon, or Harry Potter fan fiction, or even guinea pig names) and learns to imitate them. I gave the neural network more than 1,500 names from My Little Pony Friendship is Magic Wiki, and let it start learning.
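The char-rnn idea can be sketched in a few lines of numpy. This is an illustrative forward-pass-and-sample skeleton only, with random (untrained) weights and a made-up three-name dataset; a real char-rnn learns the three weight matrices by backpropagation through time over the full training corpus:

```python
import numpy as np

# Toy corpus standing in for the 1,500 pony names (illustrative only).
names = ["Twilight Sparkle", "Rainbow Dash", "Fluttershy"]
text = "\n".join(names)
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}
ix_to_char = {i: c for i, c in enumerate(chars)}
V, H = len(chars), 32  # vocabulary size, hidden-state size

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.01, size=(H, V))  # input -> hidden
W_hh = rng.normal(scale=0.01, size=(H, H))  # hidden -> hidden (the recurrence)
W_hy = rng.normal(scale=0.01, size=(V, H))  # hidden -> output logits

def sample(seed_char, n):
    """Generate n characters one at a time, feeding each output back in."""
    h = np.zeros(H)
    x = np.zeros(V); x[char_to_ix[seed_char]] = 1.0
    out = []
    for _ in range(n):
        h = np.tanh(W_xh @ x + W_hh @ h)                  # recurrent update
        logits = W_hy @ h
        p = np.exp(logits - logits.max()); p /= p.sum()   # softmax
        ix = int(rng.choice(V, p=p))
        out.append(ix_to_char[ix])
        x = np.zeros(V); x[ix] = 1.0                      # feed output back in
    return "".join(out)

generated = sample("T", 20)
```

With trained weights, the same sampling loop is what produces imitation pony names character by character.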
The AI paint name generator (previously) has refined its preferences. Though still very bad at naming paint colors, there seems to be (to my mind) an emerging personality, one that has beliefs and, perhaps, opinions about its creators.
Pictured at the top of this post, for reference, is the human-named classic Opaque Couché.
Jacques Mattheij hoped to make some cash buying cheap boxes of used, unsorted Lego that he'd organize into more valuable assortments for resale. After acquiring two metric tons of bricks, he was motivated to build a technological solution for sorting. He outfitted a conveyor belt with a cheap magnifying USB camera and employed air nozzles to blow the bricks into various bins. The bigger challenge, though, was getting the PC to identify the bricks. From IEEE Spectrum:
After a few other failed approaches, and six months in, I decided to try out a neural network. I settled on using TensorFlow, an immense library produced by the Google Brain Team. TensorFlow can run on a CPU, but for a huge speed increase I tapped the parallel computing power of the graphics processing unit in my US $700 Nvidia GTX 1080 Ti video card....
...I managed to label a starter set of about 500 assorted scanned pieces. Using those parts to train the net, the next day the machine sorted 2,000 more parts. About half of those were wrongly labeled, which I corrected. The resulting 2,500 parts were the basis for the next round of training. Another 4,000 parts went through the machine, 90 percent of which were labeled correctly! So, I had to correct only some 400 parts. By the end of two weeks I had a training data set of 20,000 correctly labeled images...
Once the software is able to reliably classify across the entire range of parts in my garage, I’ll be pushing through the remainder of those two tons of bricks.
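The bootstrap-labeling workflow Mattheij describes (train on a small hand-labeled set, let the model label a bigger batch, correct its mistakes, fold everything back into the training set, repeat) can be sketched generically. The classifier below is a stand-in nearest-centroid model on synthetic 2-D "parts", not his TensorFlow network; the batch sizes loosely echo the rounds in the quote:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the brick classifier: nearest class centroid.
def train(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Synthetic "scanned parts": two part types as well-separated 2-D clusters.
def make_batch(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + y[:, None] * 4.0
    return X, y

# Round 0: a small hand-labeled starter set.
X_train, y_train = make_batch(50)

for round_size in (200, 400):            # each round scans more parts
    X_new, y_true = make_batch(round_size)
    model = train(X_train, y_train)
    y_pred = predict(model, X_new)       # machine proposes labels
    n_wrong = int((y_pred != y_true).sum())  # only these need a human fix
    # After the human pass, every part carries a correct label;
    # fold the whole batch back into the training set.
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_true])

final_acc = (predict(train(X_train, y_train), X_train) == y_train).mean()
```

The human effort per round shrinks as the model improves, which is exactly the progression in the quote: half wrong at first, then only 10 percent.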
Neural networks, it is said, cannot explain their decisions. Which is probably a good thing, at least when it comes to the machine mind's ideas for new Dungeons & Dragons spells, as guided by Janelle Shane. [via Patrick Ziselberger]
It’s a really small dataset, actually - so small that in almost no time at all, it learned to reproduce the original input data verbatim, in order. But by setting the “temperature” flag to a really high value (i.e. it has a higher chance of NOT going with its best guess for the next character in the phrase), I can at least induce spelling mistakes. Then the neural network has to try to recover from these, with often entertaining results.
Moss Healing Word
Hold Mouse
Barking Sphere
Heat on Farm
True Steake
Finger of Enftebtemang
Fomend's Beating Sphere
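The "temperature" flag Shane mentions rescales the network's output probabilities before sampling; a high temperature flattens the distribution so unlikely characters (and therefore spelling mistakes) get picked far more often. A minimal sketch of the mechanism:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample an index after dividing the logits by the temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())   # numerically stable softmax
    p /= p.sum()
    return int(rng.choice(len(p), p=p)), p

rng = np.random.default_rng(0)
logits = [3.0, 1.0, 0.2]      # the network strongly prefers character 0

_, p_low = sample_with_temperature(logits, 0.5, rng)   # nearly greedy
_, p_high = sample_with_temperature(logits, 5.0, rng)  # close to uniform
```

At low temperature the model almost always takes its best guess; at high temperature it frequently doesn't, which is what forces the entertaining recoveries.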
@Smutclyde fed sequences of Unicode characters and short pairings, at varying lengths, through Google Translate to see what the neural networks would make of each. The results are remarkable. Lovecraftian wailings, for example, become homoerotic death metal lyrics.
And is this not as disturbing as it is funny? Especially when you consider that the machine minds are learning their way beyond our comprehension.
In her spare time, University of California, San Diego engineer Janelle Shane trained a neural network to generate recipes for new dishes. Informed by its reading of existing recipes, the neural network did improve over time yet it's clearly not quite ready for Iron Chef. Here are two recipes from her Tumblr, Postcards from the Frontiers of Science:
Pears Or To Garnestmeam
¼ lb bones or fresh bread; optional
½ cup flour
1 teaspoon vinegar
¼ teaspoon lime juice
2 eggs
Brown salmon in oil. Add creamed meat and another deep mixture.
Discard filets. Discard head and turn into a nonstick spice. Pour 4 eggs onto clean a thin fat to sink halves.
Brush each with roast and refrigerate. Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions. Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap. Chill in refrigerator until casseroles are tender and ridges done. Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.
Yield: 4 servings
This is from a network that’s been trained for a relatively long time - starting from a complete unawareness of whether it’s looking at prose or code, English or Spanish, etc, it’s already got a lot of the vocabulary and structure worked out.
This is particularly impressive given that it has the memory of a goldfish - it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making.
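That 65-character "goldfish memory" corresponds to how char-rnn chops its training text into fixed-length windows: by the time a window reaches the instructions, the title has scrolled out of it. A rough sketch of the chunking, assuming a 65-character sequence length (the helper name is ours, not char-rnn's):

```python
SEQ_LEN = 65  # the context window mentioned in the post

def training_windows(text, seq_len=SEQ_LEN):
    """Split text into (input, target) pairs; the target is the input
    shifted one character ahead, which is what the net learns to predict."""
    pairs = []
    for i in range(0, len(text) - seq_len, seq_len):
        chunk = text[i : i + seq_len + 1]
        pairs.append((chunk[:-1], chunk[1:]))
    return pairs

recipe = "Pears Or To Garnestmeam\n\nBrown salmon in oil. " * 3
pairs = training_windows(recipe)
```

Any dependency longer than one window (like a recipe title constraining the ingredients two windows later) is simply invisible to the model, so it has to guess what it's making.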
Robbie Barrat is president and founder of their high school computer science club; they created Rapper-Neural-Network, a free software project that uses machine learning trained on a corpus of 6,000 Kanye West lines to autogenerate new rap songs.
It's not bad. In fact, this is a triumph: a Christmas song written entirely by an artificial intelligence at the University of Toronto. Yet it has that uncanny neural network je ne sais quoi in spades.
I swear it’s Christmas Eve I hope that’s what you say The best Christmas present in the world is a blessing I’ve always been there for the rest of our lives.
Reed Morgan Milewicz, a programmer and computer science researcher, may be the first person to teach an AI to do Magic, literally. Milewicz wowed a popular online MTG forum—as well as hacker forums like Y Combinator’s Hacker News and Reddit—when he posted the results of an experiment to “teach” a weak AI to auto-generate Magic cards. He shared a number of the bizarre “cards” his program had come up with, replete with their properly fantastical names (“Shring the Artist,” “Mided Hied Parira's Scepter”) and freshly invented abilities (“fuseback”). Players devoured the results.
Before training my own dreaming network, I'll need to choose a network layout that suits my needs. To learn about the strengths and weaknesses of different layouts, I've run the same guided dreaming tour with four different ImageNet-pretrained models: GoogLeNet, VGG CNN-F, VGG CNN-S, and Network-in-Network (NIN), all available via the Caffe Model Zoo.
The interframe processing is the same for all except NIN which is keen to hallucinate very bright saturated spots, so I decided to couple it with a desaturation filter which effectively produces a gray background. Most of the artifacts you are likely to see stem from the cumulative nature of the interframe processing (not from compression).
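A desaturation filter of the kind described could work roughly like this: blend each frame toward its own luminance so NIN's bright saturated spots collapse toward gray. This is a generic sketch, and the blend weight is our guess, not the author's setting:

```python
import numpy as np

def desaturate(frame, amount=0.8):
    """Blend an RGB frame toward its luminance; amount=1.0 is full grayscale.
    (The 0.8 default is illustrative, not the value used in the post.)"""
    frame = frame.astype(float)
    gray = frame @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    gray3 = np.repeat(gray[..., None], 3, axis=2)    # broadcast back to 3 channels
    return (1 - amount) * frame + amount * gray3

frame = np.random.default_rng(0).uniform(0, 255, size=(4, 4, 3))
out = desaturate(frame)
```

Applied every frame, this keeps the cumulative interframe feedback from amplifying the saturated hallucinations, at the cost of the gray background the author mentions.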