AI paint color names improving, sort of

The AI paint name generator (previously) has refined its preferences. It is still very bad at naming paint colors, but there seems to be (to my mind) an emerging personality, one that has beliefs and, perhaps, opinions about its creators.

Pictured at the top of this post, for reference, is the human-named classic Opaque Couché.

Latest experiments reveal AI is still terrible at naming paint colors [Ars Technica] Read the rest

How one Lego reseller built an artificial intelligence to sort bricks

Jacques Mattheij hoped to make some cash buying cheap boxes of used, unsorted Lego that he'd organize into more valuable assortments for resale. After acquiring two metric tons of bricks, he was motivated to build a technological solution for sorting. He outfitted a conveyor belt with a cheap magnifying USB camera and employed air nozzles to blow the bricks into various bins. The bigger challenge, though, was how to get the PC to identify the bricks. From IEEE Spectrum:

After a few other failed approaches, and six months in, I decided to try out a neural network. I settled on using TensorFlow, an immense library produced by the Google Brain Team. TensorFlow can run on a CPU, but for a huge speed increase I tapped the parallel computing power of the graphics processing unit in my US $700 GTX 1080 Ti Nvidia video card....

...I managed to label a starter set of about 500 assorted scanned pieces. Using those parts to train the net, the next day the machine sorted 2,000 more parts. About half of those were wrongly labeled, which I corrected. The resulting 2,500 parts were the basis for the next round of training. Another 4,000 parts went through the machine, 90 percent of which were labeled correctly! So, I had to correct only some 400 parts. By the end of two weeks I had a training data set of 20,000 correctly labeled images...

Once the software is able to reliably classify across the entire range of parts in my garage, I’ll be pushing through the remainder of those two tons of bricks.
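The excerpt doesn't include his training code, but the bootstrap loop he describes (train on a small hand-labeled set, let the machine label the next batch, correct its mistakes, retrain) might look roughly like the TensorFlow/Keras sketch below. The directory layout and network architecture are placeholders, not Mattheij's actual setup.

```python
# Minimal sketch of the label-correct-retrain loop described above.
# Paths and architecture are illustrative; only the overall workflow
# follows the article.
import tensorflow as tf

IMG_SIZE = (128, 128)

# Images that have already been checked by a human, one folder per part.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/labeled", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on whatever has been hand-checked so far...
model.fit(train_ds, epochs=10)

# ...then let the model guess labels for the next batch of scans.
# A human corrects the mistakes, the corrected images join data/labeled/,
# and the whole thing repeats with a bigger training set.
unlabeled = tf.keras.utils.image_dataset_from_directory(
    "data/unlabeled", labels=None, image_size=IMG_SIZE,
    batch_size=32, shuffle=False)
predictions = model.predict(unlabeled)
```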

Read the rest

Dungeons & Dragons spells created by a neural network

Neural networks, it is said, cannot explain their decisions. Which is probably a good thing, at least when it comes to the machine mind's ideas for new Dungeons & Dragons spells, as guided by Janelle Shane. [via Patrick Ziselberger]

It’s a really small dataset, actually - so small that in almost no time at all, it learned to reproduce the original input data verbatim, in order. But by setting the “temperature” flag to a really high value (i.e. it has a higher chance of NOT going with its best guess for the next character in the phrase), I can at least induce spelling mistakes. Then the neural network has to try to recover from these, with often entertaining results.

Moss Healing Word
Hold Mouse
Barking Sphere
Heat on Farm
True Steake
Finger of Enftebtemang
Fomend’s Beating Sphere
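The "temperature" flag she mentions is a standard sampling trick for character-level models: divide the network's output scores by a temperature before turning them into probabilities, so a higher temperature flattens the distribution and unlikely characters get picked more often. A minimal sketch in Python, with a made-up score vector standing in for real network output:

```python
import numpy as np

def sample_with_temperature(scores, temperature=1.0):
    """Sample one character index from raw network scores.

    temperature > 1 flattens the distribution (more "spelling mistakes");
    temperature < 1 sharpens it toward the model's best guess.
    """
    scaled = np.asarray(scores, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Toy scores over a four-character alphabet, not real network output.
scores = [2.0, 1.0, 0.2, -1.0]
print(sample_with_temperature(scores, temperature=0.5))   # almost always 0
print(sample_with_temperature(scores, temperature=2.0))   # far more random
```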

For the best one you'll have to click through. Read the rest

The weird poetry Google Translate writes when fed the same characters over and over

@Smutclyde fed sequences of unicode characters and short pairings, at varying lengths, through Google Translate to see what the neural networks would make of each one. The results are remarkable. Lovecraftian wailings, for example, become homoerotic death metal lyrics. And is this not as disturbing as it is funny? Especially when you consider that the machine minds are learning their way beyond our comprehension. Read the rest

Neural network comes up with crazy food recipes

In her spare time, University of California, San Diego engineer Janelle Shane trained a neural network to generate recipes for new dishes. Informed by its reading of existing recipes, the neural network did improve over time, yet it's clearly not quite ready for Iron Chef. Here are two recipes from her Tumblr, Postcards from the Frontiers of Science:

Pears Or To Garnestmeam

meats

¼ lb bones or fresh bread; optional
½ cup flour
1 teaspoon vinegar
¼ teaspoon lime juice
2 eggs

Brown salmon in oil. Add creamed meat and another deep mixture. Discard filets. Discard head and turn into a nonstick spice. Pour 4 eggs onto clean a thin fat to sink halves.

Brush each with roast and refrigerate.  Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions.  Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap.  Chill in refrigerator until casseroles are tender and ridges done.  Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.

Yield: 4 servings

This is from a network that’s been trained for a relatively long time - starting from a complete unawareness of whether it’s looking at prose or code, English or Spanish, etc, it’s already got a lot of the vocabulary and structure worked out. This is particularly impressive given that it has the memory of a goldfish - it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making.
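That 65-character limit is just the size of the text window the network is trained on; anything earlier has scrolled out of view by the time it predicts the next character. Roughly, the corpus gets chopped up like the sketch below (plain Python; the window size comes from the quote, and the scrap of recipe text is only for illustration):

```python
# Chop a recipe corpus into fixed-length training windows. With a
# 65-character window, the recipe title has already scrolled out of
# view by the time the network reaches the instructions.
SEQ_LENGTH = 65   # the window size mentioned in the quote

def make_windows(text, seq_length=SEQ_LENGTH):
    """Yield (input, target) pairs; each target is the input shifted by one."""
    for i in range(len(text) - seq_length):
        yield text[i:i + seq_length], text[i + 1:i + seq_length + 1]

corpus = ("Pears Or To Garnestmeam\n\nmeats\n\n"
          "1/4 lb bones or fresh bread; optional\n"
          "1/2 cup flour\n1 teaspoon vinegar\n"
          "1/4 teaspoon lime juice\n2 eggs\n")
inputs, targets = zip(*make_windows(corpus))
print(inputs[0])    # the network only ever sees these 65 characters...
print(targets[0])   # ...and learns to predict each next character from them.
```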

Read the rest

Rhymes from a high-schooler's machine learning system trained on Kanye lyrics

Robbie Barrat is president and founder of their high school computer science club; they created Rapper-Neural-Network, a free software project that uses machine learning trained on a corpus of 6,000 Kanye West lines to autogenerate new rap songs. Read the rest

An AI wrote a Christmas song

It's not bad. In fact, this is a triumph: a Christmas song written entirely by an artificial intelligence at the University of Toronto. Yet it has that uncanny neural network je ne sais quoi in spades.

I swear it’s Christmas Eve
I hope that’s what you say
The best Christmas present in the world is a blessing
I’ve always been there for the rest of our lives.
Read the rest

Magic cards generated by neural networks

@RoboRosewater is a twitter account that posts, once a day, a Magic: The Gathering card generated by a recurrent neural network. [via Ditto]

This is an implementation of the approach described by Vice's Brian Merchant in this article.

Reed Morgan Milewicz, a programmer and computer science researcher, may be the first person to teach an AI to do Magic, literally. Milewicz wowed a popular online MTG forum—as well as hacker forums like Y Combinator’s Hacker News and Reddit—when he posted the results of an experiment to “teach” a weak AI to auto-generate Magic cards. He shared a number of the bizarre “cards” his program had come up with, replete with their properly fantastical names (“Shring the Artist,” “Mided Hied Parira's Scepter”) and freshly invented abilities (“fuseback”). Players devoured the results.

Here's the code, and here's a simple text-only generator.
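The linked code does the real work, but the core idea is easy to sketch: flatten each card's fields into a single line of text so a character-level network can learn the format and then dream up new ones. The field names and delimiter below are invented for illustration, not the project's actual encoding:

```python
# Illustrative only: one way to flatten a card into a single text line
# for a character-level model. The real project's encoding differs.
def encode_card(card):
    fields = [card["name"], card["cost"], card["type"], card["text"]]
    return "|".join(fields)

card = {
    "name": "Shring the Artist",   # one of the generated names from the article
    "cost": "{2}{U}",
    "type": "Creature - Wizard",
    "text": "When Shring the Artist enters the battlefield, draw a card.",
}
print(encode_card(card))
# Thousands of lines like this, one per real card, become the training text;
# sampling from the trained network then produces new "cards" in the same shape.
```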

Magic: The Gathering is Turing-complete. Read the rest

Which neural net has the best hallucinations?

Ville-Matias Heikkilä posted four trips to YouTube, each the work of a different "DeepDreaming"-style neural network (based on the Caffe deep learning framework) applied to the same source image.

Before training my own dreaming network, I’ll need to choose a network layout that suits my needs. In order to learn about the strengths and weaknesses of different layouts, I’ve run the same guided dreaming tour with four different ImageNet-pretrained models: GoogLeNet, VGG CNN-F, VGG CNN-S and the Network-in-Network ImageNet model (all available via the Caffe model zoo).

The interframe processing is the same for all except NIN which is keen to hallucinate very bright saturated spots, so I decided to couple it with a desaturation filter which effectively produces a gray background. Most of the artifacts you are likely to see stem from the cumulative nature of the interframe processing (not from compression).
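The videos come out of Caffe-based tooling, but the basic "DeepDreaming" move is the same everywhere: push an image through a pretrained network, then nudge the pixels by gradient ascent so that some chosen layer's activations get stronger. A rough sketch in PyTorch (the model, the hooked layer, and the step size are stand-ins, not Heikkilä's actual pipeline):

```python
# Rough DeepDream-style loop: gradient ascent on one layer's activations.
# Model choice, layer, and hyperparameters are illustrative stand-ins.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()   # GoogLeNet, as in the post
for p in model.parameters():
    p.requires_grad_(False)

activations = {}
def hook(module, inputs, output):
    activations["feat"] = output

# Which layer you hook determines what the network "sees" in the image.
model.inception4c.register_forward_hook(hook)

img = Image.open("source.jpg").convert("RGB").resize((224, 224))
img = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(20):                       # a handful of small ascent steps
    model(img)
    loss = activations["feat"].norm()     # make this layer's response louder
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
# img now holds the "dreamed" frame, ready to be clamped to [0, 1] and saved.
```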

[via Hacker News.] Read the rest