In her delightful blog AI Weirdness, Janelle Shane entered 18,458 unique bills introduced in Massachusetts into a neural network, which then created some rather hilarious bills, including:
I used a program called a character-level recurrent neural network (char-rnn), which looks at examples of text (Pokemon, or Harry Potter fan fiction, or even guinea pig names) and learns to imitate them. I gave the neural network more than 1,500 names from My Little Pony Friendship is Magic Wiki, and let it start learning.
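A char-rnn is a recurrent neural network, but the core idea — learning which character tends to follow which context, then sampling one character at a time — can be illustrated with a much simpler character-level Markov model. This is a rough sketch of that idea, not Shane's actual setup; the handful of pony names here is just a stand-in for her 1,500-name training set.

```python
import random
from collections import defaultdict

def train_char_model(names, order=3):
    """Count which character tends to follow each `order`-length context."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"  # ^ pads the start, $ marks the end
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=3, max_len=30):
    """Sample a new name one character at a time, like char-rnn's sampler."""
    context = "^" * order
    out = []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

ponies = ["Twilight Sparkle", "Rainbow Dash", "Fluttershy",
          "Applejack", "Pinkie Pie", "Rarity"]
model = train_char_model(ponies)
print(generate(model))
```

With a tiny corpus like this, the output mostly stitches together fragments of real names — which is also roughly what happens when a real char-rnn is trained on a small dataset.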
The AI paint name generator (previously) has refined its preferences. Though still very bad at naming paint colors, there seems to be (to my mind) an emerging personality, one that has beliefs and, perhaps, opinions about its creators.
Pictured at the top of this post, for reference, is the human-named classic Opaque Couché.
Jacques Mattheij hoped to make some cash buying cheap boxes of used, unsorted Lego that he'd organize into more valuable assortments for resale. After acquiring two metric tons of bricks, he was motivated to build a technological solution for sorting. He outfitted a conveyor belt with a cheap magnifying USB camera and employed air nozzles to blow the bricks into various bins. The bigger challenge, though, was getting the PC to identify the bricks. From IEEE Spectrum:
After a few other failed approaches, and six months in, I decided to try out a neural network. I settled on using TensorFlow, an immense library produced by the Google Brain Team. TensorFlow can run on a CPU, but for a huge speed increase I tapped the parallel computing power of the graphics processing unit in my US $700 GTX 1080 Ti Nvidia video card....
...I managed to label a starter set of about 500 assorted scanned pieces. Using those parts to train the net, the next day the machine sorted 2,000 more parts. About half of those were wrongly labeled, which I corrected. The resulting 2,500 parts were the basis for the next round of training. Another 4,000 parts went through the machine, 90 percent of which were labeled correctly! So, I had to correct only some 400 parts. By the end of two weeks I had a training data set of 20,000 correctly labeled images...
Once the software is able to reliably classify across the entire range of parts in my garage, I’ll be pushing through the remainder of those two tons of bricks.
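The bootstrapping loop Mattheij describes — train on a small hand-labeled set, let the model pre-label the next batch, hand-correct its mistakes, fold everything back in, and retrain — can be sketched generically. The nearest-centroid "classifier" below is a toy stand-in for his TensorFlow network, and `hand_label` stands in for the human review step; none of this is his actual code.

```python
def train_centroids(labeled):
    """Toy classifier: one mean feature vector per class."""
    sums, counts = {}, {}
    for features, label in labeled:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Nearest centroid by squared Euclidean distance."""
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: sqdist(centroids[label]))

def bootstrap(seed_set, batches, hand_label):
    """Each round: retrain, pre-label a batch, let a human fix the mistakes,
    and grow the labeled training set -- as in the article."""
    labeled = list(seed_set)
    for batch in batches:
        centroids = train_centroids(labeled)
        for features in batch:
            guess = predict(centroids, features)  # machine's pre-label
            labeled.append((features, hand_label(features)))  # human keeps or fixes it
    return train_centroids(labeled)
```

The payoff of the loop is that as the model improves, the human only has to fix a shrinking fraction of each batch — which is why Mattheij's correction workload dropped from half a batch to ten percent.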
Neural networks, it is said, cannot explain their decisions. Which is probably a good thing, at least when it comes to the machine mind's ideas for new Dungeons & Dragons spells, as guided by Janelle Shane. [via Patrick Ziselberger]
It’s a really small dataset, actually - so small that in almost no time at all, it learned to reproduce the original input data verbatim, in order. But by setting the “temperature” flag to a really high value (i.e. it has a higher chance of NOT going with its best guess for the next character in the phrase), I can at least induce spelling mistakes. Then the neural network has to try to recover from these, with often entertaining results.
Moss Healing Word
Hold Mouse
Barking Sphere
Heat on Farm
True Steake
Finger of Enftebtemang
Fomend’s Beating Sphere
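The "temperature" knob Shane mentions is easy to picture numerically: the model's raw scores for each possible next character are divided by the temperature before being turned into probabilities, so a high temperature flattens the distribution and the sampler strays from its best guess more often. A generic illustration (not char-rnn's actual code):

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw scores to probabilities; high temperature flattens them."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def sample(logits, temperature=1.0):
    """Pick an index at random according to the tempered distribution."""
    probs = softmax_with_temperature(logits, temperature)
    r = random.random()
    running = 0.0
    for i, p in enumerate(probs):
        running += p
        if r <= running:
            return i
    return len(probs) - 1

scores = [2.0, 1.0, 0.1]  # a model's raw preference over three characters
print(softmax_with_temperature(scores, 0.2))   # low T: almost always the top pick
print(softmax_with_temperature(scores, 10.0))  # high T: near uniform, hence "typos"
```

At low temperature the top character dominates and the net reproduces its training data; at high temperature the occasional wrong character slips in, which is exactly how Shane induces the spelling mistakes the network then has to recover from.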
Pears Or To Garnestmeam
¼ lb bones or fresh bread; optional
½ cup flour
1 teaspoon vinegar
¼ teaspoon lime juice
2 eggs
Brown salmon in oil. Add creamed meat and another deep mixture. Discard filets. Discard head and turn into a nonstick spice. Pour 4 eggs onto clean a thin fat to sink halves.
Brush each with roast and refrigerate. Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions. Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap. Chill in refrigerator until casseroles are tender and ridges done. Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.
Yield: 4 servings
This is from a network that’s been trained for a relatively long time - starting from a complete unawareness of whether it’s looking at prose or code, English or Spanish, etc., it’s already got a lot of the vocabulary and structure worked out. This is particularly impressive given that it has the memory of a goldfish - it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making.
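The 65-character memory is easy to visualize: the network conditions only on a sliding window of recent text, so by the time the instructions begin, the title has scrolled out of view. A small sketch of the idea (the snippet and window size are just for illustration):

```python
def visible_context(text, window=65):
    """All the network can condition on: the last `window` characters."""
    return text[-window:]

recipe = ("Pears Or To Garnestmeam\n\n"
          "Brown salmon in oil. Add creamed meat and another deep mixture.")
# By the time the instructions start, the title has fallen out of memory:
print(visible_context(recipe))
```

Run this and the window no longer contains "Pears" at all — which is why the network has to guess, mid-recipe, what dish it is supposedly making.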
Robbie Barrat is president and founder of their high school computer science club; they created Rapper-Neural-Network, a free software project that uses machine learning trained on a corpus of 6,000 Kanye West lines to autogenerate new rap songs. Read the rest
It's not bad. In fact, this is a triumph: a Christmas song written entirely by an artificial intelligence at the University of Toronto. Yet it has that uncanny neural network je ne sais quoi in spades.
I swear it’s Christmas Eve
I hope that’s what you say
The best Christmas present in the world is a blessing
I’ve always been there for the rest of our lives.
Reed Morgan Milewicz, a programmer and computer science researcher, may be the first person to teach an AI to do Magic, literally. Milewicz wowed a popular online MTG forum—as well as hacker forums like Y Combinator’s Hacker News and Reddit—when he posted the results of an experiment to “teach” a weak AI to auto-generate Magic cards. He shared a number of the bizarre “cards” his program had come up with, replete with their properly fantastical names (“Shring the Artist,” “Mided Hied Parira's Scepter”) and freshly invented abilities (“fuseback”). Players devoured the results.
Before training my own dreaming network, I'll need to choose a network layout that suits my needs. In order to learn about the strengths and weaknesses of different layouts, I've run the same guided dreaming tour with four different ImageNet-pretrained models: GoogLeNet, VGG CNN-F, VGG CNN-S, and Network-in-Network (all available via the Caffe Model Zoo).
The interframe processing is the same for all except NIN, which is keen to hallucinate very bright saturated spots, so I decided to couple it with a desaturation filter, which effectively produces a gray background. Most of the artifacts you are likely to see stem from the cumulative nature of the interframe processing (not from compression).
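The post doesn't include the filter itself, but a per-pixel desaturation of the sort described might look like the sketch below. The Rec. 601 luma weights are standard; the `amount` blend parameter is my own addition, since a partial desaturation is a common way to tame oversaturated spots without going fully gray.

```python
def desaturate(pixel, amount=1.0):
    """Blend an (r, g, b) pixel toward its luma. amount=1.0 is full grayscale,
    amount=0.0 leaves the pixel untouched."""
    r, g, b = pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luminance weights
    return tuple(c + (luma - c) * amount for c in (r, g, b))

print(desaturate((255, 0, 0)))        # a saturated red collapses to a mid gray
print(desaturate((255, 0, 0), 0.5))   # half strength keeps some of the hue
```

Applied frame-wide, full-strength desaturation is what would produce the gray background the author describes.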