Lytro promises focus-free shooting

A new camera sensor design from Lytro captures light in such a way that the focus can be changed in post. Check out the demonstration images at its homepage, and the CEO's dissertation on how it works:
My proposed solution to the focus problem exploits the abundance of digital image sensor resolution to sample each individual ray of light that contributes to the final image. ... To record the light field inside the camera, digital light field photography uses a microlens array in front of the photosensor. Each microlens covers a small array of photosensor pixels. The microlens separates the light that strikes it into a tiny image on this array, forming a miniature picture of the incident lighting. This samples the light field inside the camera in a single photographic exposure. ... To process final photographs from the recorded light field, digital light field photography uses ray-tracing techniques. The idea is to imagine a camera configured as desired, and trace the recorded light rays through its optics to its imaging plane. Summing the light rays in this imaginary image produces the desired photograph. This ray-tracing framework provides a general mechanism for handling the undesired non-convergence of rays that is central to the focus problem. What is required is imagining a camera in which the rays converge as desired in order to drive the final image computation.
This sounds like a plenoptic setup, similar to one demoed by Adobe here. [Thanks, Jim!]
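
In code, the "trace rays through an imaginary camera and sum them" step reduces to shift-and-add over the sub-aperture images recorded under the microlenses. A minimal sketch, assuming a hypothetical 4-D light-field array and integer-pixel shifts (a real Lytro file would need unpacking and demosaicing first):

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add.

    light_field: array of shape (U, V, S, T), one S x T sub-aperture
    image of the scene for each lens position (u, v). Hypothetical
    layout for illustration.
    alpha: refocus parameter controlling how far each sub-aperture
    image is shifted toward the chosen virtual focal plane.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # integer-pixel shift proportional to the lens offset
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)  # average of the realigned rays
```

With alpha = 0 this is a plain average (focus at the main-lens plane); varying alpha sweeps the virtual focal plane through the scene.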


  1. Is their image gallery meant to allow you to switch focus on the images? I only see static single/focused images.

  2. Awesome stuff.

    Cue photographic purists complaining in 3, 2, 1…..

    Photographic film purists are the best. They get uptight about photoshop when airbrushing, compositing multiple images, dodging and other manipulations have been around for ages.

    Wait till light field cameras do cinema quality video. That will make the job of focus puller obsolete and cut down on retakes.

  3. The quick summary: each microlens directs the different rays of light arriving at it onto multiple pixels on the CCD. These areas are non-overlapping.

    So if you’re using a 10×10 area for each microlens, the resulting image is 1/100th the resolution of what a normal camera image would be.

    This doesn’t seem like a horrible tradeoff. I think we’ve reached the apex of useful pixel density on consumer cameras, so instead of a 40 megapixel image, why not a 1.5 MP image with 5×5 lensing for your facebook or webpage pics? Especially given the advantages in light gathering, post-focus, and 3D information.

    It also seems like some clever person could figure out how to still extract the data from overlapping microlenses.
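
    The trade-off arithmetic above is easy to check (strictly, 5×5 lensing on a 40 MP sensor works out to 1.6 MP):

```python
# Effective output resolution when each microlens covers an n x n
# block of sensor pixels: raw megapixels divided by n squared.
sensor_mp = 40.0                # raw sensor resolution from the comment
for n in (5, 10):
    print(f"{n}x{n} lensing: {sensor_mp / (n * n):.2f} MP")
# prints 1.60 MP for 5x5 and 0.40 MP for 10x10
```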

    1. There is an extremely easy answer to that:

      - a non-standard format

      Not to point out the obvious, but most home users that are putting pictures on FB are not going to take the time to select their photo settings and export them to jpg to upload them to FB. For heaven’s sake, the most used “camera” on Flickr is the ifone. And that camera is piss-poor compared to a good Canon point-and-shoot that is 5 years old.

      Put this technology into something the size of an SLR and make it at least 10 MP “effective” (at least something comparable to the current generation of midrange DSLRs). I can see professionals using this way more than casual home users.

      Or for some serious bonus points figure out how to get Adobe to buy into your technology, put it in everything they sell, make it a web standard and voila everyone would use it. (Assuming the file size isn’t horrendous.)

      1. It isn’t clear that whatever raw format this camera spits out is really intended to be the distribution format (not that it would be a secret; they’d obviously want it to be compatible with industry-accepted photo workflow tools; but it would be about as ‘web safe’ as just dumping a .nef or a .crw onto friendtwit and expecting something useful to happen).

        I assume that the selling point would be the (today either impossible, impractical, or requiring seriously zooty lenses) sorts of instantaneous “bracketing” that would allow you to get well-focused images of fast-moving scenes, where a re-shoot isn’t an option, or do assorted arty depth-of-field stuff without a backpack full of glass. You would then do the usual raw -> distribution format processing step, only with the additional focus-tweaking abilities.

  4. frankieboy, there’s no reason it can’t be done; you can already get deep focus images by stacking multiple shallow focus images with varying focal points – something you get automatically with this system.
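
    The stacking approach mentioned here is easy to sketch. A toy merge for grayscale frames, assuming a crude 4-neighbour Laplacian as the per-pixel sharpness measure (real focus-stacking tools use more robust blending):

```python
import numpy as np

def focus_stack(images):
    """Merge differently focused grayscale frames into one deep-focus
    image by picking, per pixel, the frame with the strongest local
    contrast (a simple 4-neighbour Laplacian sharpness measure)."""
    stack = np.stack(images)                       # (N, H, W)
    lap = np.abs(
        np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
        - 4 * stack
    )                                              # sharpness per frame
    best = np.argmax(lap, axis=0)                  # winning frame index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Feeding it one sharp and one defocused exposure of the same scene returns mostly pixels from the sharp one; a light-field camera would supply the differently focused frames from a single exposure.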

  5. Great, even _more_ time to be spent in front of your computer at home trying to fix shots, plus an exponential increase in file sizes and processing power requirements. Add that to combing through each and every frame from 24-frame-per-second HD footage to try and find the “right moment” and you’re basically stuck doing post-processing 24/7.

  6. “This doesn’t seem like a horrible tradeoff. I think we’ve reached the apex of useful pixel density on consumer cameras, so instead of a 40 megapixel image, why not a 1.5 MP image with 5×5 lensing for your facebook or webpage pics? Especially given the advantages in light gathering, post-focus, and 3D information.”

    The tradeoff isn’t just image size though. You’re right — we already have too many megapixels. The limiting factor in quality is now the lens: but this demands MUCH more from the lens.

    I believe that a camera that used this process, even with only a 10×10 array (it’s not clear what their demos use), would need staggeringly expensive lenses to be able to produce a final output frame that looked as good as the cheapest SLRs on the market.

    1. Actually, light-field imaging doesn’t require expensive lenses – it can potentially even be used with completely random lenses in a process called compressed sensing. It can be shown that for most real-world images a random lens and sensor matrix can capture more useful information than a regular lens and sensor with the same number of sensing elements. This is related to the same property that lets images, sound and movies be compressed heavily without losing much fidelity.

      A random lens would need initial calibration, but a lens with only some amount of randomness could actually be calibrated iteratively and get better over time. Scratches, dirt and so on can also be compensated for to some degree.

      A light field imaging camera doesn’t even necessarily need the bulky thing we usually call a lens. The lens could be spread out and integrated into the camera body.

      The limiting factor is probably the processing power and storage needed – at least for mobile devices.
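
      A miniature demo of the compressed-sensing claim: a 1-sparse "scene" is recovered from half as many random measurements as unknowns with a single matched-filter step. All the numbers here are invented for illustration; real reconstructions use convex optimization or greedy pursuit:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "scene" with 200 unknowns but only one bright point (1-sparse).
n, m = 200, 100
x = np.zeros(n)
x[37] = 5.0                      # hypothetical bright pixel

# Random "lens": each of the 100 measurements mixes all 200 unknowns.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# Matched-filter recovery: the true column of A correlates with y far
# more strongly than any other column, so the bright pixel is found
# despite having only half as many measurements as unknowns.
j = int(np.argmax(np.abs(A.T @ y)))
coeff = A[:, j] @ y / (A[:, j] @ A[:, j])
# recovers j == 37 with coeff of 5.0
```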

  7. I use a primitive 3D rig – two old digital cameras mounted side by side. What I found interesting: when overlapping the images, I can change what the focus of the scene is for the person viewing the anaglyph simply by overlapping the two pics in a different place.

  8. Mind: Blown.

    Many times I have pondered the feasibility of this concept. Then some smart people did it.

    @4: haha igadget fail?

  9. Well, I would hope the default export format would be .jpg with everything at all focal lengths perfectly clear, since I think that’s what Joe or Jane Blow would want.

    Obsessive post-processors would indeed be in for a whole new world of hurt.

  10. “This ray-tracing framework provides a general mechanism for handling the undesired non-convergence of rays that is central to the focus problem”

    Focus problem? As the neighborhood photographic purist, I don’t think there’s necessarily anything wrong with photography as it stands today.

    Curious though, I wonder how well it works in low light conditions. I would imagine, from an experimental photo standpoint, you could devise some pretty funky stuff if you wiggled the camera during a long exposure.

  11. In a related note, Red is coming out with the first HDR camera, capable of capturing 13.5+ stops of exposure level.

    Digital photography is just getting warmed up, folks.

  12. Very interesting and innovative but … what’s the point? It doesn’t take much time and effort to change the focus point from the spear to that guy or vice versa and shoot another picture. It’s called creativity and it’s good for you, especially if you’re passionate about photography.

    1. Very interesting and innovative but … what’s the point? It doesn’t take much time and effort to change the focus point from the spear to that guy or vice versa and shoot another picture.

      The point is when you spend that time.

      For example, is it one time only, when you haven’t got the rest of your work-in-progress in front of you, in the middle of a field of biting flies, or is it in your artsy soho loft, surrounded by the other photos that will also be part of the finished work? I’m sure you can think of other examples of when that matters.

      It’s called creativity and it’s good for you, especially if you’re passionate about photography.

      Exactly! Now you’re getting it! Enhancing the available options inherently allows greater personal expression!

  13. So could these pictures be reconstructed as holograms? Do they contain depth information?

  14. Ren has been working on this for a while. This piece from a couple of years back discusses the microlens approach, and two more computational photography options: ‘coded apertures’, where you put a number of different-sized pin holes in the aperture to capture different depths of field at once, and ‘coded exposures’, where you take a number of different length sub exposures, summing to the right total exposure, in order to capture a variety of information about how fast the subject is moving. Useful for deblurring, apparently.

    Also worth noting that Ren talks about being able to code out problems in lenses, such as distortion or chromatic aberration, potentially cutting costs….
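
    The coded-exposure idea survives even a toy 1-D version: fluttering the shutter with a broadband binary pattern keeps the motion blur invertible, where a plain open shutter (a box blur) zeroes out whole frequency bands. The code pattern and signal below are invented for the demo:

```python
import numpy as np

# Fluttered-shutter code: the open/closed pattern during the exposure
# becomes the blur kernel, so a broadband pattern stays invertible.
code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=float)
signal = np.zeros(40)
signal[12:18] = [1, 3, 2, 5, 2, 1]     # a small moving feature

blurred = np.convolve(signal, code)    # what the sensor records

# Deblur by least squares against the convolution (Toeplitz) matrix.
T = np.zeros((len(blurred), len(signal)))
for i in range(len(signal)):
    T[i:i + len(code), i] = code
recovered, *_ = np.linalg.lstsq(T, blurred, rcond=None)
# recovered matches signal to machine precision (noiseless demo)
```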

  15. ISTM that a lot of 3D information would be retained in the image data too. It’s not a hologram but it’s somewhere between that and a flat image.

  16. This is great, though god knows if/when it’ll be a useful tech.
    On the one hand, the resolution and lensing requirements seem astronomical.
    On the other hand, 10 years ago you’d be thought of as crazy if you proposed taking 5 megapixel images on a phone and putting a thousand of them on a storage drive the size of a fingernail.

  17. When i try to look at this current page with firefox, i get a big red warning that this link is a Reported Web Forgery!
    (in chrome there is no warning)

    Anyone else having this glitch?

  18. There are many points. As stated by someone else above, there are certain times where focusing completely accurately is difficult to say the least. As a fellow photographer I understand that having everything automated will take some of the magic out of it. As a photoshop whore I’m drooling at the possibilities…

    There are however, many points outside of the realm of photography.

    An example from something I read yesterday describes a current dilemma with security cameras. A camera will follow a person moving in the frame, adjusting the focus as it needs to. If there are two people in the frame at different depths, it’s not possible to follow both. With this sort of setup it’s possible to have 50 people at different depths all in focus.

    There’s also the advantage of being able to take shots with a deep depth of field without the small aperture and long shutter speed that normally requires.

    Apparently, these things also work very well in low-light conditions, but judging by the fact that virtually all the shots on their page seem to be full of nice bright light, I’m a bit skeptical about that one at the moment.

  19. If I said I have invented and will produce a car that runs on pure water and I point you to information about the steam engine, then throw a launch party without showing you the car, without an estimated date for production or a price, will you also be excited? Will you not ask me “where’s the car”?

    1. I might be asking you why you need a trailer full of coal (or otherwise, how you plan on making the steam without burning fossil fuels).

      This is an early days proof of concept as I see it. It’s not a launch party. I’d rather read about these advancements now than in three years time.

      Besides, this is likely going to need a few other things in place beforehand to make it commercially viable. Off the top of my head, support in Photoshop for an infinite focus image. Perhaps extra hardware to cope with the processing. Best get these balls rolling now while there’s time.

Comments are closed.