In Neural Photo Editing With Introspective Adversarial Networks, a group of University of Edinburgh engineers and an industry collaborator describe a method for using "introspective adversarial networks" to edit images in real time, which they demonstrate in an open source project called "Neural Photo Editor" that "enhances" photos by predicting what should be under your brush.
We present an interface, shown in Figure 1, that allows for a more intuitive exploration of a generative
model by indirectly manipulating the latent space with a “contextual paintbrush.” The key idea is
simple: a user selects a paint brush size and color (as with a typical image editor) and paints on
the output image. Instead of changing individual pixels, the interface backpropagates the difference
between the local image patch and the requested color, and takes a gradient descent step in the latent
space to minimize that difference. This step results in globally coherent changes that are semantically
meaningful in the context of the requested color change.
For example, if a user has an image of a person with light skin, dark hair, and a widow’s peak, by
painting a dark color on the forehead, the system will automatically add hair in the requested area.
Similarly, if a user has a photo of a person with a closed-mouth smile, the user can produce a toothy
grin by painting bright white over the target’s mouth. This method is non-iterative in the sense that
a single gradient descent step is taken every time the user requests a change, and runs smoothly in
real-time on a modest laptop GPU.
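The paintbrush step described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the trained introspective adversarial network is replaced by a toy random linear-plus-sigmoid "generator" so the latent-space gradient step can be shown end to end, and the learning rate and sizes are arbitrary.

```python
# Sketch of the "contextual paintbrush": one gradient-descent step in latent
# space so the generated patch under the brush moves toward a requested color.
# The generator here is a toy stand-in (an assumption), not the paper's IAN.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, img_size = 16, 8
W = rng.normal(scale=0.1, size=(3 * img_size * img_size, latent_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(z):
    """Toy generator: latent vector -> 3 x H x W image with values in [0, 1]."""
    return sigmoid(W @ z).reshape(3, img_size, img_size)

def paint_step(z, mask, color, lr=0.1):
    """Single latent-space gradient step toward `color` under the brush `mask`.

    Minimizes the mean squared difference between the painted patch and the
    requested color, backpropagating by hand through the toy generator.
    """
    img = generate(z)
    resid = np.zeros_like(img)
    resid[:, mask] = img[:, mask] - color[:, None]   # difference only under the brush
    n = 3 * mask.sum()
    loss = (resid[:, mask] ** 2).sum() / n
    # Backprop d loss / d z through the sigmoid and the linear map:
    grad_pre = (2.0 / n) * (resid * img * (1.0 - img)).reshape(-1)
    grad_z = W.T @ grad_pre
    return z - lr * grad_z, loss

z = rng.normal(size=latent_dim)
mask = np.zeros((img_size, img_size), dtype=bool)
mask[2:5, 2:5] = True                                # a 3x3 "brush stroke"
white = np.ones(3)                                   # paint bright white
z, before = paint_step(z, mask, white)
z, after = paint_step(z, mask, white)
print(before, after)
```

Note the non-iterative flavor: each user action triggers exactly one `paint_step`, and because the change is made in latent space rather than to the pixels, the whole image can shift coherently, not just the painted patch.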
This method works well for samples directly generated by the network, but fails when applied directly
to existing photos, as it relies on the manipulated image being directly controlled by the latent
variables. Reconstructing images that have passed through such a representational bottleneck (e.g., an autoencoder) is difficult, and prone to produce reconstructions which, lacking pixel-perfect
accuracy, are useless for making small changes to natural images.
To combat this, we introduce a simple masking technique that allows a user to edit images by
modifying a given photo’s reconstruction, then transferring those changes to the original image.
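The transfer idea can be sketched as follows: compute the pixelwise change between the edited and unedited reconstructions, then paste only that change onto the original photo, so reconstruction error never reaches the output. The threshold-based mask below is an assumption for illustration; the paper's exact masking rule may differ.

```python
# Sketch of edit transfer: edits are made on the model's reconstruction, and
# only the changed pixels are applied to the original image.
import numpy as np

def transfer_edit(original, recon, recon_edited, threshold=0.05):
    """Apply the change between two reconstructions onto the original image.

    original, recon, recon_edited : float arrays of shape (3, H, W) in [0, 1].
    The mask keeps pixels whose change exceeds `threshold` in any channel
    (a hypothetical masking rule, chosen for simplicity).
    """
    delta = recon_edited - recon                          # what the edit changed
    mask = np.abs(delta).max(axis=0, keepdims=True) > threshold
    return np.clip(original + mask * delta, 0.0, 1.0)

original = np.full((3, 4, 4), 0.5)
recon = np.full((3, 4, 4), 0.45)       # imperfect reconstruction of `original`
edited = recon.copy()
edited[:, 1:3, 1:3] = 0.9              # user paints on the reconstruction
out = transfer_edit(original, recon, edited)
print(out[0, 0, 0], out[0, 1, 1])      # untouched pixel unchanged; edited pixel shifted
```

Outside the edited region `delta` is zero, so the original pixels pass through untouched even though the reconstruction itself was off by 0.05 everywhere.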
Neural-Photo-Editor [Andrew Brock/Github]
Neural Photo Editing With Introspective Adversarial Networks [Andrew Brock, Theodore Lim, J.M. Ritchie and Nick Weston/Arxiv]
(via Four Short Links)