If you have a photo in need of foliage, fire up Photoshop and generate a 3D-rendered Sakura Cherry Blossom, Redwood, Young Maple, Palm, or nearly three dozen other trees. Just go to Filter > Render > Tree and let your creativity, er, take root. The feature even lets you tweak leaf sizes, branch height, and other variables. But what's the story behind it? From Input:
The trees, it turns out, came in response to architectural artists who wanted to be able to drop trees into their work but struggled to smoothly integrate them into the image. Before the tree filter was introduced in 2014, designers would have to cut out a preexisting image of a tree taken at the right angle and then paste it in.
“We thought it would be convenient if you could generate customizable trees that fit illustrations,” says Daichi Ito, the technical research artist who developed the tree filter for Adobe. “By ‘fit,’ I mean it doesn’t have a strong style; it’s somewhat realistic, but not photorealistic.”
Ito created the project as part of the development of an engine, codenamed Deco, that would help Photoshop create generative patterns. “Daichi came to us and said, ‘I can actually write a bunch of interesting scripts that leverage that Deco engine and allow us to generate all kinds of things,’” recalls Stephen Nielson, director of product management for Photoshop at Adobe. Ito spent a month writing the algorithm that created the generative images. “Generating tree data took me some more time,” he adds.
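This isn't Adobe's actual Deco script, but the core idea behind parametric tree generation can be sketched with a simple recursive branching rule, whose knobs (depth, branch angle, length decay) play the role of the filter's sliders:

```python
import math

def grow(x, y, angle, length, depth, spread=0.5, decay=0.7):
    """Return a list of branch segments ((x0, y0), (x1, y1)).

    Each branch spawns two children, rotated +/- `spread` radians and
    shortened by `decay`, until `depth` runs out or branches get tiny.
    """
    if depth == 0 or length < 1.0:
        return []
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segs = [((x, y), (x2, y2))]
    for turn in (-spread, spread):  # two child branches per node
        segs += grow(x2, y2, angle + turn, length * decay, depth - 1,
                     spread, decay)
    return segs

# Trunk pointing straight up, six generations of branching.
tree = grow(0.0, 0.0, math.pi / 2, 100.0, depth=6)
```

Feeding the resulting segments to any 2D renderer produces a recognizable tree silhouette; varying `spread` and `decay` changes the species-like character of the shape, much as the filter's sliders do.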
Pyotr Basin (1793–1877) painted "The Earthquake in Rocca di Papa, near Rome" in 1830. According to Bruce Sterling, these images are the result of a "couple of guys screwing around attacking a 19th century Russian painting with deep-dreamers." I can't find anything else about them, but they're fantastic.
Jason Bailey is curating a show of generative art, among the first major retrospectives of computer-mediated work. It comes at an important time, too, as the art business's Morf Vandewalts fuss over machine learning as if it were born yesterday.
In the last twelve months we have seen a tremendous spike in interest in “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself, a history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.
Oblique Strategies (Over One Hundred Worthwhile Dilemmas), released in 1975 by Brian Eno and the late British multimedia artist Peter Schmidt, is a deck of 100+ cards with evocative statements designed for musicians, artists, and others who find their creative imaginations stuck in a ditch. The minimal, modern black and white cards, housed in an equally stark black box, present strange statements and directives that the querent agrees to follow and allow to inform the current phase of his or her work. Here, let me draw a couple of cards. I got: "Tidy up," "When is it for?," "Give way to your worst impulse," and "Look at the order in which you do things."
If you need any testament to the efficacy of these cards, they were used in the studio to aid in the composition and engineering of tracks on Eno's Another Green World and Before and After Science, Bowie's Berlin-period records (Low, "Heroes," Lodger), and again on Bowie's 1995 record, Outside (among many other records).
I fell in love with Oblique Strategies when I fell in love with all of the above records that were created with its assistance. Oblique Strategies was one of the first apps I installed on my first iPhone (and have had on every phone since). When I heard, in the early aughts, that the long out-of-print deck was back in print, I jumped at the chance to finally own a physical copy. In my world of hoodoo and woo-woo, oracular cards should be physical.
Frederik Vanhoutte describes himself as a creative coder who works in the field of generative art. His site W:BLUT has lots of cool little experiments. Above, Big Red I, a longer fractal experiment that evokes Frank Lloyd Wright.
Damien Henry, co-inventor of Google Cardboard, trained a machine learning algorithm using footage shot from a moving vehicle and then had the machine generate this beautiful video.
"Graphics are 100% generated by an algorithm in one shot. No edit or post-processing," Henry writes. "Except the first one, all frames are calculated one by one by a prediction algorithm that tries to predict the next frame from the previous one."
The soundtrack is the Steve Reich masterpiece "Music for 18 Musicians."
Paavo Toivanen wrote code that generates uncannily human, but utterly meaningless and illegible writing. Toivanen's thoughts on generative art are worth reading.
Generative art should ideally retain two disparate levels of perception: the material and visual qualities of a piece of art, and then a creation story or script and the intellectual journey that led to the end result.
The creation story is what's missing in most generative art, especially when it's presented as a representation of nature.
Linify Me accepts JPG uploads and redraws the images using only straight lines. The effect is ghostly yet technical, resembling something human-drawn but not enough to be confused as such. Watching the picture emerge over time is strangely meditative. Unless you've uploaded a picture of Trump, that is, in which case it's just another example of something slowly going wrong on a computer.
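Linify's actual algorithm isn't published, but one plausible guess at how a tool like it works is a greedy scheme: repeatedly pick the straight chord whose pixels are darkest in the remaining image, record it, and "lift" that darkness so later lines go elsewhere. A minimal sketch under that assumption:

```python
import random

def bresenham(x0, y0, x1, y1):
    """Integer pixel coordinates along a straight line segment."""
    pts, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return pts
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def linify(img, n_lines, candidates=50, seed=0):
    """img: 2D list of darkness values in [0, 1]. Greedily picks
    `n_lines` chords covering the darkest remaining pixels."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]          # work on a copy
    lines = []
    for _ in range(n_lines):
        best, best_score = None, -1.0
        for _ in range(candidates):        # sample random chords
            p = (rng.randrange(w), rng.randrange(h))
            q = (rng.randrange(w), rng.randrange(h))
            pts = bresenham(*p, *q)
            score = sum(img[y][x] for x, y in pts) / len(pts)
            if score > best_score:
                best, best_score = (p, q, pts), score
        p, q, pts = best
        lines.append((p, q))
        for x, y in pts:                   # lift drawn-over darkness
            img[y][x] = max(0.0, img[y][x] - 0.5)
    return lines

dark_stripe = [[1.0 if y == 4 else 0.1 for _ in range(8)] for y in range(8)]
strokes = linify(dark_stripe, 3)
```

Drawing the returned chords in order, lightest ink first, reproduces the slow, meditative emergence the post describes: early lines trace the darkest features, later ones fill in shading.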
Kent sez, "Enter your Twitter handle and watch as your tiny online avatar turns into large-scale generative art. Results can look like batik, pastel, or tie-dye, depending on the original."
We're calling the Twitter API from Yahoo! Query Language, receiving an image URL for your avatar, converting it to a data:uri, and returning its base64-encoded value as JSON with a callback.
Then we create an image on the client, load it with the data YQL gave us, and stretch it to fit our (comparatively very large) canvas tag.
Since we've created the image locally, the usual canvas security restrictions don't apply and we're free to sample pixels. We do this, collecting color values and positions, and then we start drawing circles with random sizes and tiny random offsets from where each color sample was taken.
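The drawing step Kent describes translates readily out of the browser: sample pixel colors on a coarse grid, then emit circles with random radii and a small random jitter around each sample point. His version paints these onto a `<canvas>`; this Python sketch just computes the circle specs (the grid spacing and size/jitter ranges are illustrative guesses, not his values):

```python
import random

def circles_from_pixels(pixels, cell=4, max_r=6.0, jitter=2.0, seed=42):
    """pixels: 2D list of (r, g, b) tuples. Returns a list of
    (x, y, radius, color) circle specs, one per grid cell."""
    rng = random.Random(seed)
    specs = []
    for y in range(0, len(pixels), cell):
        for x in range(0, len(pixels[0]), cell):
            color = pixels[y][x]           # sampled color value
            specs.append((
                x + rng.uniform(-jitter, jitter),  # tiny random offset
                y + rng.uniform(-jitter, jitter),
                rng.uniform(1.0, max_r),           # random size
                color,
            ))
    return specs

# A toy 16x16 "avatar" gradient standing in for the decoded data:uri image.
avatar = [[(x * 3, y * 3, 128) for x in range(16)] for y in range(16)]
dots = circles_from_pixels(avatar)
```

Scaling the (x, y) positions up to a much larger canvas before drawing gives the batik/tie-dye look: each circle keeps its sampled color, but the random sizes and offsets break the rigid pixel grid.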
Avatar Portraits .:. kentbrewster.com
(Thanks, Kent!)