Robbie Barrat is generating warped, surreal paintings using artificial intelligence and the results are really something.
Usually the machine just paints people as blobs of flesh with tendrils and limbs randomly growing out of them - I think it's really surreal. I wonder if that's how machines see us...
Here's Bonnie Burton in CNET:
The results are surreal. Barrat posted many of the final pieces of artwork -- which can only be described as surreal, blobby, swirly naked women -- on Twitter. It's almost like a very intoxicated Salvador Dali and a dizzy Picasso joined forces to make art. ...Barrat's AI-assisted artwork isn't exactly sensual. In fact, most of the nudes look like they are melting on a very hot day.
"The way that it paints faces makes me uncomfortable. It always paints them as like, purple and yellow globs -- that isn't in the training set so I'm actually still not sure why it does that.
I don't like looking at those heads, I really don't."
In her delightful blog AI Weirdness, Janelle Shane fed 18,458 unique bills introduced in Massachusetts into a neural network, which then generated some rather hilarious bills.
Janelle Shane trained a neural network on the names and attributes of My Little Ponies, then shared "some of the worst ones."
I used a program called a character-level recurrent neural network (char-rnn), which looks at examples of text (Pokemon, or Harry Potter fan fiction, or even guinea pig names) and learns to imitate them. I gave the neural network more than 1,500 names from My Little Pony Friendship is Magic Wiki, and let it start learning.
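Shane is using an off-the-shelf char-rnn, not code of her own; as a rough illustration of what a character-level recurrent network does under the hood, here is a minimal numpy sketch. The toy corpus, the weight shapes, and the `step` function are hypothetical stand-ins, not her actual setup:

```python
import numpy as np

# A minimal sketch of the char-rnn idea: at each step the network consumes
# one character plus its hidden state and outputs a probability
# distribution over the next character.

names = ["Twilight Sparkle", "Rainbow Dash", "Fluttershy"]  # toy stand-in corpus
chars = sorted(set("".join(names)))
char_to_ix = {c: i for i, c in enumerate(chars)}
vocab, hidden = len(chars), 32

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (hidden, vocab))   # input -> hidden
Whh = rng.normal(0, 0.01, (hidden, hidden))  # hidden -> hidden (the "memory")
Why = rng.normal(0, 0.01, (vocab, hidden))   # hidden -> output logits

def step(ch, h):
    """One recurrent step: consume a character, return next-char probabilities."""
    x = np.zeros(vocab)
    x[char_to_ix[ch]] = 1.0                  # one-hot encode the character
    h = np.tanh(Wxh @ x + Whh @ h)           # update the hidden state
    logits = Why @ h
    p = np.exp(logits - logits.max())        # numerically stable softmax
    return p / p.sum(), h

h = np.zeros(hidden)
for ch in "Rainbow":                         # feed a name in one character at a time
    probs, h = step(ch, h)

print(probs.shape)                           # one probability per character in the vocabulary
```

Training consists of nudging those three weight matrices so the predicted distribution matches the actual next character; generation runs the same loop but samples from `probs` at each step.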
The AI paint name generator (previously) has refined its preferences. Though still very bad at naming paint colors, there seems to be (to my mind) an emerging personality, one that has beliefs and, perhaps, opinions about its creators.
Pictured at the top of this post, for reference, is the human-named classic Opaque Couché.
Latest experiments reveal AI is still terrible at naming paint colors [Ars Technica]
Jacques Mattheij hoped to make some cash buying cheap boxes of used, unsorted Lego that he'd organize into more valuable assortments for resale. After acquiring two metric tons of bricks, he was motivated to build a technological solution for sorting. He outfitted a conveyor belt with a cheap magnifying USB camera and employed air nozzles to blow the bricks into various bins. The bigger challenge though was how to get the PC to identify the bricks. From IEEE Spectrum:
After a few other failed approaches, and six months in, I decided to try out a neural network. I settled on using TensorFlow, an immense library produced by the Google Brain Team. TensorFlow can run on a CPU, but for a huge speed increase I tapped the parallel computing power of the graphics processing unit in my US $700 GTX 1080 Ti Nvidia video card....
...I managed to label a starter set of about 500 assorted scanned pieces. Using those parts to train the net, the next day the machine sorted 2,000 more parts. About half of those were wrongly labeled, which I corrected. The resulting 2,500 parts were the basis for the next round of training. Another 4,000 parts went through the machine, 90 percent of which were labeled correctly! So, I had to correct only some 400 parts. By the end of two weeks I had a training data set of 20,000 correctly labeled images...
Once the software is able to reliably classify across the entire range of parts in my garage, I’ll be pushing through the remainder of those two tons of bricks.
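Mattheij's actual training code isn't shown here, but the bootstrapping loop he describes — train on a small hand-labeled set, let the model label a larger batch, correct its mistakes, fold the corrections back in — can be sketched in outline. Every function name below is a hypothetical stand-in for his TensorFlow training code and the manual review step:

```python
# A hypothetical sketch of Mattheij's bootstrapping loop. `train_model`,
# `predict`, and `correct_by_hand` stand in for the real TensorFlow
# training run and the human correction pass.

def bootstrap_labels(seed_set, unlabeled_batches, train_model, predict, correct_by_hand):
    labeled = list(seed_set)                  # e.g. ~500 hand-labeled images to start
    for batch in unlabeled_batches:           # e.g. 2,000 parts, then 4,000
        model = train_model(labeled)          # retrain on everything labeled so far
        guesses = [(img, predict(model, img)) for img in batch]
        corrected = correct_by_hand(guesses)  # fix only the wrong labels by hand
        labeled.extend(corrected)             # grow the training set for the next round
    return labeled                            # ends at ~20,000 correctly labeled images
```

The payoff is the shrinking correction cost he reports: 50 percent wrong on the first batch, then only 10 percent wrong on the next, because each round trains on a bigger corrected set.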
Neural networks, it is said, cannot explain their decisions. Which is probably a good thing, at least when it comes to the machine mind's ideas for new Dungeons & Dragons spells, as guided by
Janelle Shane. [via Patrick Ziselberger]
It’s a really small dataset, actually - so small that in almost no time at all, it learned to reproduce the original input data verbatim, in order. But by setting the “temperature” flag to a really high value (i.e. it has a higher chance of NOT going with its best guess for the next character in the phrase), I can at least induce spelling mistakes. Then the neural network has to try to recover from these, with often entertaining results.
Moss Healing Word
Heat on Farm
Finger of Enftebtemang
Fomend’s Beating Sphere
For the best one you'll have to click through.
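The "temperature" flag Shane mentions works by scaling the network's output logits before the softmax: high temperature flattens the distribution, so the net is less likely to pick its best-guess next character. A minimal sketch of that knob (not her char-rnn code; the logit values are made up):

```python
import numpy as np

# Temperature-scaled sampling distribution: divide the logits by a
# temperature before the softmax. T < 1 sharpens the distribution,
# T > 1 flattens it, inducing the "spelling mistakes" described above.

def sample_probs(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())                   # numerically stable softmax
    return e / e.sum()

logits = [4.0, 2.0, 1.0]                      # net strongly prefers option 0
cold = sample_probs(logits, temperature=0.5)  # sharpens toward the top choice
hot = sample_probs(logits, temperature=5.0)   # flattens; more surprising picks
print(cold[0], hot[0])
```

With the flattened distribution the net occasionally emits a character it would never choose greedily, and its attempts to recover from the resulting misspellings produce names like "Enftebtemang."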
@Smutclyde fed sequences of Unicode characters and short pairings, at varying lengths, into Google Translate to see what the neural networks would make of each. The results are remarkable. Lovecraftian wailings, for example, become homoerotic death metal lyrics.
And is this not as disturbing as it is funny? Especially when you consider that the machine minds are learning their way beyond our comprehension.
In her spare time, University of California, San Diego engineer Janelle Shane trained a neural network to generate recipes for new dishes. Informed by its reading of existing recipes, the neural network did improve over time, yet it's clearly not quite ready for Iron Chef. Here are two recipes from her Tumblr, Postcards from the Frontiers of Science:
Pears Or To Garnestmeam
¼ lb bones or fresh bread; optional
½ cup flour
1 teaspoon vinegar
¼ teaspoon lime juice
2 eggs
Brown salmon in oil. Add creamed meat and another deep mixture.
Discard filets. Discard head and turn into a nonstick spice. Pour 4 eggs onto clean a thin fat to sink halves.
Brush each with roast and refrigerate. Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions. Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap. Chill in refrigerator until casseroles are tender and ridges done. Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.
Yield: 4 servings
This is from a network that’s been trained for a relatively long time - starting from a complete unawareness of whether it’s looking at prose or code, English or Spanish, etc., it’s already got a lot of the vocabulary and structure worked out.
This is particularly impressive given that it has the memory of a goldfish - it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making.
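That 65-character limit is easy to demonstrate: a toy illustration using the recipe above as a stand-in string (this mimics the effect of the truncated context, not the network's actual internals):

```python
# "Memory of a goldfish": with a 65-character window, by the time the
# network is writing the instructions, the recipe title has already
# scrolled out of its context, so it has to guess what it's making.

recipe = ("Pears Or To Garnestmeam\n\n"
          "Brown salmon in oil. Add creamed meat and another deep mixture.")
WINDOW = 65

# The context visible while predicting the final character of the recipe:
context = recipe[-WINDOW:]
print("Title still visible?", "Pears" in context)
```

The title fits comfortably in the window while the ingredients are being written, but it is gone well before the final instruction, which is why a salmon recipe can end with cracker pudding.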
Robbie Barrat is president and founder of their high school computer science club; they created Rapper-Neural-Network, a free software project that uses machine learning trained on a corpus of 6,000 Kanye West lines to autogenerate new rap songs.
It's not bad. In fact, this is a triumph: a Christmas song written entirely by an artificial intelligence at the University of Toronto. Yet it has that uncanny neural network je ne sais quoi in spades.
I swear it’s Christmas Eve
I hope that’s what you say
The best Christmas present in the world is a blessing
I’ve always been there for the rest of our lives.
@RoboRosewater is a twitter account that posts, once a day, a Magic: The Gathering card generated by a recurrent neural network. [via Ditto]
This is an implementation of the science described by Vice's Brian Merchant in this article.
Reed Morgan Milewicz, a programmer and computer science researcher, may be the first person to teach an AI to do Magic, literally. Milewicz wowed a popular online MTG forum—as well as hacker forums like Y Combinator’s Hacker News and Reddit—when he posted the results of an experiment to “teach” a weak AI to auto-generate Magic cards. He shared a number of the bizarre “cards” his program had come up with, replete with their properly fantastical names (“Shring the Artist,” “Mided Hied Parira's Scepter”) and freshly invented abilities (“fuseback”). Players devoured the results.
Here's the code, and here's a simple text-only generator.
Magic: The Gathering is Turing-complete.
Ville-Matias Heikkilä posted four trips to YouTube, each the work of a different "DeepDreaming"-style neural network (based on the Caffe deep learning framework) but all using the same source image.
Before training my own dreaming network, I'll need to choose a network layout that suits my needs. In order to learn about the strengths and weaknesses of different layouts, I've run the same guided dreaming tour with four different Imagenet-pretrained models: GoogLeNet, VGG CNN-F, VGG CNN-S and Network-in-Network Imagenet model (all available via Caffe model zoo).
The interframe processing is the same for all except NIN, which is keen to hallucinate very bright saturated spots, so I decided to couple it with a desaturation filter which effectively produces a gray background. Most of the artifacts you are likely to see stem from the cumulative nature of the interframe processing (not from compression).
[via Hacker News]