The cover story in this week's Science News is about computational photography, new methods of capturing an image that yield a profoundly different photograph than is possible with a traditional camera (analog or digital). For example, a group at Columbia University creates 3D models by putting a mirrored cone around the lens. The cone lets the camera capture the subject from multiple points of view at once, which are then used to construct a 3D digital model. And my old friend Paul Debevec at the University of Southern California can accurately alter the lighting in an image after it's shot by calculating how the subject would appear under any lighting conditions. The image seen here shows how Debevec uses a room filled with hundreds of flashes to capture the various lighting combinations.
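The trick behind this kind of relighting is that light is additive: a photo of the subject under any new lighting is just a weighted sum of the photos taken under each individual flash. Here's a minimal sketch of that idea in Python, with array names and shapes of my own invention rather than anything from Debevec's actual system:

```python
import numpy as np

def relight(basis_images, weights):
    """Synthesize a photo under new lighting as a weighted sum of
    basis photographs, one per flash (light transport is additive).

    basis_images: array of shape (n_lights, height, width, 3),
                  one photograph per individual flash
    weights:      array of shape (n_lights,), the intensity of each
                  light in the target lighting environment
    """
    # Each output pixel is that pixel summed across all basis images,
    # scaled by how bright the corresponding light is in the new scene.
    return np.tensordot(weights, basis_images, axes=1)
```

In practice the weights would come from sampling the desired lighting environment in the direction of each flash, so the subject appears lit as if it had been photographed there.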
From Science News:
Another alteration of a camera's field of view makes it possible to shoot a picture first and focus it later. Todor Georgiev, a physicist working on novel camera designs at Adobe, the San Jose, Calif.–based company that produces Photoshop, has developed a lens that splits the scene that a camera captures into many separate images.
Georgiev's group etched a grid of square minilenses into a lens, making it look like an insect's compound eye. Each minilens creates a separate image of the scene, so together they effectively shoot the scene from 20 slightly different vantage points. Software merges the mini-images into a single image that the photographer can focus and refocus at will. The photographer can even slightly change the apparent vantage point of the camera. The team described this work last year in Cyprus at the Eurographics Symposium on Rendering.
In essence, the technique replaces the camera's focusing lens with a virtual lens.
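That virtual lens is easy to picture in code. Each mini-image sees the scene through a different part of the aperture, so shifting the views against one another and averaging them brings a chosen depth into sharp alignment while blurring everything else. This is the standard shift-and-add way to refocus a light field, not necessarily Georgiev's exact algorithm; the sketch below assumes the sub-images and each minilens's offset from the lens center have already been extracted:

```python
import numpy as np

def refocus(sub_images, offsets, alpha):
    """Shift-and-add refocusing of a light-field capture.

    sub_images: array of shape (n_views, height, width, 3),
                one image per minilens
    offsets:    array of shape (n_views, 2), each minilens's (dy, dx)
                position relative to the center of the main lens
    alpha:      refocus parameter; 0 keeps the original focal plane,
                larger magnitudes move the virtual focal plane
    """
    acc = np.zeros_like(sub_images[0], dtype=np.float64)
    for img, (dy, dx) in zip(sub_images, offsets):
        # Translate each view in proportion to its aperture position;
        # points at the chosen depth line up across views, everything
        # else lands in different places and averages into blur.
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        # np.roll wraps pixels around the edges; a real implementation
        # would pad or crop instead, but this keeps the sketch short.
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(sub_images)
```

Sweep alpha and the virtual focal plane moves through the scene; average only a subset of the views and the apparent vantage point shifts slightly, just as the article describes.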