At The Economist, Glenn Fleishman reports on Lytro's first-to-market implementation of computational photography. The result: you can refocus the shot after taking it.
A novel approach to photographic imaging is making its way into cameras and smartphones. Computational photography, a subdiscipline of computer graphics, conjures up images rather than simply capturing them. More computer animation than pinhole camera, in other words, though using real light refracted through a lens rather than the virtual sort. The basic premise is to use multiple exposures, and even multiple lenses, to capture information from which photographs may be derived. These data contain a raft of potential pictures which software then converts into what, at first blush, looks like a conventional photo.

I still don't quite get the talk about ray tracing. The part that makes sense to me, however, seems to explain it all: the camera has a wide-open aperture on the main optics, but a bubble-wrap-like plane of tiny lenses in front of the sensor, which thereby ends up capturing a fly's-eye myriad of differently-focused fragments of the same scene. The software assembles a final composite depending on which of these you later focus on in post. It improves upon established focus-stacking techniques because all the fragments are captured simultaneously in a single exposure, at the cost of dividing up the sensor's megapixelage between them. Something like that, anyway. I'm going to play Minecraft.

Previously: Lytro promises focus-free shooting
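The "assemble a composite from differently-focused fragments" step is, as far as I can tell, the standard shift-and-sum refocusing trick: treat the lenslet fragments as sub-aperture views, shift each view in proportion to its position in the array, and average. Here's a minimal NumPy sketch of that idea on a synthetic 4-D light field -- an illustration of the general technique, not Lytro's actual pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4-D light field.

    light_field has shape (U, V, H, W): one sub-aperture view per
    (u, v) lenslet position. Shifting each view by alpha times its
    offset from the array centre, then averaging, brings the depth
    plane whose parallax matches that slope into focus; everything
    else smears out.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy light field: 3x3 views of an 8x8 scene containing one bright
# point, with a parallax of 1 pixel per view step.
base = np.zeros((8, 8))
base[4, 4] = 1.0
lf = np.stack([[np.roll(base, (u - 1, v - 1), axis=(0, 1))
                for v in range(3)] for u in range(3)])

sharp = refocus(lf, alpha=-1.0)  # shift cancels the parallax: point in focus
blurred = refocus(lf, alpha=0.0)  # plain average: point smeared across 9 pixels
```

The `alpha` parameter is the knob you turn "in post": each value refocuses the same single exposure onto a different depth plane, which is why no focusing needs to happen at capture time.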