A physics student and an engineering student from Stanford fed 400,000 memes to a Long Short-Term Memory (LSTM) recurrent neural network and asked it to generate more memes of its own.
It did a pretty good job.
We acknowledge that one of the greatest challenges in our project and other language modeling tasks is to capture humor, which varies across people and cultures. In fact, this constitutes a research area of its own, and accordingly new research ideas on this problem should be incorporated into the meme generation project in the future. One example would be to train on a dataset that includes the break point in the text between the upper and lower captions of the image. These were chosen manually here and are important for the humor impact of the meme. If the model could learn the break points, this would be a huge improvement and could fully automate meme generation. Another avenue for future work would be to explore visual attention mechanisms that operate on the images and to investigate their role in meme generation tasks. Lastly, we note that there was a bias in the dataset towards expletive, racist and sexist memes, so yet another possibility for future work would be to address this bias.
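The break-point idea can be made concrete with a small data-representation sketch: encode each caption as one token sequence with a special separator where the top text ends, so a sequence model could learn to emit the separator itself. This is a minimal illustration, not the paper's implementation; the `<sep>`/`<eos>` token names and helper functions are assumptions.

```python
# Illustrative sketch (not from the paper): represent the manually chosen
# upper/lower break as a special token in the training sequence, so a
# language model could learn break placement instead of relying on humans.

SEP = "<sep>"  # hypothetical marker for the top/bottom break
EOS = "<eos>"  # hypothetical end-of-caption marker

def encode_caption(top: str, bottom: str) -> list:
    """Turn a (top, bottom) caption pair into a single training sequence."""
    return top.lower().split() + [SEP] + bottom.lower().split() + [EOS]

def decode_caption(tokens: list) -> tuple:
    """Split a generated token sequence back into top and bottom text."""
    if EOS in tokens:
        tokens = tokens[:tokens.index(EOS)]
    if SEP in tokens:
        cut = tokens.index(SEP)
        return " ".join(tokens[:cut]), " ".join(tokens[cut + 1:])
    return " ".join(tokens), ""  # model never emitted a break

seq = encode_caption("One does not simply", "generate dank memes")
# seq == ['one', 'does', 'not', 'simply', '<sep>',
#         'generate', 'dank', 'memes', '<eos>']
top, bottom = decode_caption(seq)
```

With this encoding, the break point becomes just another token the LSTM predicts, which is what would allow full automation of caption layout.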
Dank Learning: Generating Memes Using Deep Neural Networks [Abel L Peirson V and E Meltem Tolunay/Stanford/Arxiv]
(via Four Short Links)