Lytro promises focus-free shooting


37 Responses to “Lytro promises focus-free shooting”

  1. Anonymous says:

    “This ray-tracing framework provides a general mechanism for handling the undesired non-convergence of rays that is central to the focus problem”

    Focus problem? As the neighborhood photographic purist, I don’t think there’s necessarily anything wrong with photography as it stands today.

    Curious, though: I wonder how well it works in low-light conditions. I would imagine, from an experimental photo standpoint, you could devise some pretty funky stuff if you wiggled the camera during a long exposure.

  2. Camp Freddie says:

    This is great, though god knows if/when it'll be a useful tech.
    On the one hand, the resolution and lensing requirements seem astronomical.
    On the other hand, ten years ago you'd have been thought crazy if you proposed taking 5-megapixel images on a phone and putting a thousand of them on a storage drive the size of a fingernail.

  3. Anonymous says:

    “Insert comment lamenting the Bokeh here”

  4. teufelsdroch says:

    On a related note, Red is coming out with the first HDR camera, capable of capturing 13.5+ stops of dynamic range.

    http://www.youtube.com/watch?v=MZ7s6xWP3e0

    Digital photography is just getting warmed up, folks.

  5. nehpetsE says:

    When I try to look at this page with Firefox, I get a big red warning that this link is a Reported Web Forgery!
    (In Chrome there is no warning.)

    Anyone else having this glitch?

  6. Anonymous says:

    Great idea but where is the prototype?

  7. Pharmalade says:

    Zoom and enhance.

  8. Anonymous says:

    So the result will be a sharp, very low-rez image.

  9. Roger Wilco says:

    Well, now I’ve seen everything.

  10. Anonymous says:

    Is their image gallery meant to allow you to switch focus on the images? I only see static, single-focus images.

  11. Anonymous says:

    Awesome stuff.

    Cue photographic purists complaining in 3, 2, 1…

    Photographic film purists are the best. They get uptight about Photoshop even though airbrushing, compositing multiple images, dodging, and other manipulations have been around for ages.

    Wait till light field cameras can do cinema-quality video. That will make the job of focus puller obsolete and cut down on retakes.

  12. jfrancis says:

    Fairly low resolution, if I recall…

    (from 2008)

    http://news.cnet.com/8301-13580_3-9876296-39.html#ixzz1PzJpP6nS

  13. Rob Beschizza says:

    Their demo photos are rendered in Flash.

  14. frankieboy says:

    So can you keep it all in focus, foreground and background? For a deep focus effect?

  15. pimlottc says:

    The way the demo photos are presented is very cool.

  16. oldtaku says:

    The quick summary: each microlens directs the rays of light arriving at it onto multiple pixels of the CCD, so different ray directions land on different pixels. These pixel areas are non-overlapping.

    So if you're using a 10×10 pixel area for each microlens, the resulting image is 1/100th the resolution of what a normal camera would produce.

    This doesn't seem like a horrible tradeoff. I think we've reached the apex of useful pixel density on consumer cameras, so instead of a 40 megapixel image, why not a 1.5 MP image with 5×5 lensing for your facebook or webpage pics? Especially given the advantages in light gathering, post-focus, and 3D information.

    It also seems like some clever person could figure out how to extract useful data even from overlapping microlenses.
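
    Back-of-envelope, with made-up numbers (a quick Python sketch of that resolution tradeoff, nothing official from Lytro):

      # Effective 2D resolution of a plenoptic sensor when an n x n patch of
      # sensor pixels sits behind each microlens: spatial resolution drops by
      # n*n, and those pixels buy angular (direction) samples instead.
      def effective_megapixels(sensor_mp, n):
          return sensor_mp / (n * n)

      print(effective_megapixels(40.0, 5))   # 40 MP sensor, 5x5 lensing -> 1.6 MP
      print(effective_megapixels(40.0, 10))  # 10x10 lensing -> 0.4 MP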

    • bcsizemo says:

      There is an extremely easy answer to that:

      - a non-standard format

      Not to point out the obvious, but most home users who are putting pictures on FB are not going to take the time to select their photo settings and export them to JPEG before uploading. For heaven's sake, the most-used "camera" on Flickr is the iPhone. And that camera is piss-poor compared to a good Canon point-and-shoot that is five years old.

      Put this technology into something the size of an SLR and make it at least 10 MP "effective" (at least something comparable to the current generation of midrange DSLRs). I can see professionals using this way more than a casual home user.

      Or for some serious bonus points figure out how to get Adobe to buy into your technology, put it in everything they sell, make it a web standard and voila everyone would use it. (Assuming the file size isn’t horrendous.)

      • phisrow says:

        It isn't clear that whatever raw format this camera spits out is really intended to be the distribution format (not that it would be a secret; they'd obviously want it to be compatible with industry-accepted photo workflow tools, but it'd be about as 'web safe' as dumping a .nef or a .crw onto friendtwit and expecting something useful to happen).

        I assume that the selling point would be the sorts of instantaneous "bracketing" (today either impossible, impractical, or requiring seriously zooty lenses) that would let you get well-focused images of fast-moving scenes where a re-shoot isn't an option, or do assorted arty depth-of-field stuff without a backpack full of glass. You would then do the usual raw-to-distribution-format processing step, only with the additional focus-tweaking abilities.

  17. Anonymous says:

    frankieboy, there's no reason it can't be done; you can already get deep-focus images by stacking multiple shallow-focus shots taken at varying focus distances, something this system gives you automatically from a single exposure.
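
    For the curious, a minimal sketch of that kind of focus stacking (my own illustration in Python, assuming you already have aligned grayscale slices; this is not Lytro's pipeline):

      # Keep each pixel from whichever slice is locally sharpest, where
      # sharpness is estimated from the local average Laplacian magnitude.
      import numpy as np
      from scipy.ndimage import laplace, uniform_filter

      def focus_stack(slices):
          """slices: list of same-sized 2D float arrays focused at different depths."""
          stack = np.stack(slices)                                    # (k, H, W)
          sharp = np.stack([uniform_filter(np.abs(laplace(s)), 9) for s in slices])
          best = np.argmax(sharp, axis=0)                             # sharpest slice per pixel
          return np.take_along_axis(stack, best[None], axis=0)[0]     # all-in-focus image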

  18. Anonymous says:

    If I said I had invented, and would produce, a car that runs on pure water, pointed you to information about the steam engine, and then threw a launch party without showing you the car, without an estimated production date or a price, would you also be excited? Would you not ask me, "Where's the car?"

    • Anonymous says:

      I might be asking you why you need a trailer full of coal (or otherwise, how you plan on making the steam without burning fossil fuels).

      This is an early-days proof of concept, as I see it. It's not a launch party. I'd rather read about these advancements now than in three years' time.

      Besides, this is likely going to need a few other things in place beforehand to make it commercially viable. Off the top of my head: support in Photoshop for an infinite-focus image, and perhaps extra hardware to cope with the processing. Best get those balls rolling now while there's time.

  19. Itsumishi says:

    There are many points. As someone else noted above, there are times when focusing accurately is difficult, to say the least. As a fellow photographer I understand that having everything automated will take some of the magic out of it. As a Photoshop whore I'm drooling at the possibilities…

    There are, however, many points outside the realm of photography.

    An example from something I read yesterday describes a current dilemma with security cameras. A camera will follow a person moving in the frame, adjusting the focus as it needs to. If there are two people in the frame at different depths, it's not possible to follow both. With this sort of setup it's possible to have 50 people at different depths, all in focus.

    There's also the advantage of being able to take shots with a deep depth of field without needing a long shutter speed.

    Apparently these things also work very well in low-light conditions, but judging by the fact that virtually all the shots on their page seem to be full of nice bright light, I'm a bit skeptical about that one at the moment.

  20. Anonymous says:

    Very interesting and innovative, but … what's the point? It doesn't take much time and effort to change the focus point from the spear to that guy, or vice versa, and shoot another picture. It's called creativity and it's good for you, especially if you're passionate about photography.

    • Anonymous says:

      Very interesting and innovative, but … what's the point? It doesn't take much time and effort to change the focus point from the spear to that guy, or vice versa, and shoot another picture.

      The point is when you spend that time.

      For example, is it one time only, when you haven't got the rest of your work-in-progress in front of you, in the middle of a field of biting flies? Or is it in your artsy SoHo loft, surrounded by the other photos that will also be part of the finished work? I'm sure you can think of other examples of when the "when" is important.

      It’s called creativity and it’s good for you, especially if you’re passionate about photography.

      Exactly! Now you’re getting it! Enhancing the available options inherently allows greater personal expression!

  21. poagao says:

    Great, even _more_ time to be spent in front of your computer at home trying to fix shots, plus an exponential increase in file sizes and processing power requirements. Add that to combing through each and every frame from 24-frame-per-second HD footage to try and find the “right moment” and you’re basically stuck doing post-processing 24/7.

  22. Michael Smith says:

    So could these pictures be reconstructed as holograms? Do they contain depth information?

  23. bartoncasey says:

    “This doesn’t seem like a horrible tradeoff. I think we’ve reached the apex of useful pixel density on consumer cameras, so instead of a 40 megapixel image, why not a 1.5 MP image with 5×5 lensing for your facebook or webpage pics? Especially given the advantages in light gathering, post-focus, and 3D information.”

    The tradeoff isn't just image size, though. You're right that we already have too many megapixels; the limiting factor in quality is now the lens. But this approach demands MUCH more from the lens.

    I believe that a camera that used this process, even with only a 10×10 array (it’s not clear what their demos use), would need staggeringly expensive lenses to be able to produce a final output frame that looked as good as the cheapest SLRs on the market.

    • Anonymous says:

      Actually, light-field imaging doesn't require expensive lenses; it can even potentially be used with completely random lenses in a process called compressed sensing. It can be shown that for most real-world images, a random lens and sensor matrix can capture more useful information than a regular lens and sensor with the same number of sensing elements. This is related to the same properties that let images, sound, and movies be compressed heavily without losing much fidelity.

      A random lens will need initial calibration, but a lens with only some amount of randomness could actually be calibrated iteratively and become better over time. Scratches, dirt, and so on can also be compensated for to some degree.

      A light-field camera doesn't even necessarily need the bulky thing we usually call a lens. The lens could be spread out and integrated into the camera body.

      The limiting factor is probably the processing power and storage needed, at least for mobile devices.
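
      A toy numerical sketch of that compressed-sensing idea (my own illustration, with made-up dimensions): recover a sparse signal from far fewer random measurements than unknowns by an L1-regularized fit.

        # y = A @ x with m < n: a "random lens" A still lets us recover a
        # sparse x via iterative soft-thresholding (ISTA).
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 200, 60, 5                       # signal length, measurements, nonzeros
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

        A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
        y = A @ x_true                             # the m recorded measurements

        lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(500):                       # gradient step + soft threshold
            x = x - step * A.T @ (A @ x - y)
            x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

        print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))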

  24. Anonymous says:

    Ren has been working on this for a while. This piece from a couple of years back discusses the microlens approach and two more computational photography options: "coded apertures", where you put a number of different-sized pinholes in the aperture to capture different depths of field at once, and "coded exposures", where you take a number of different-length sub-exposures, summing to the right total exposure, in order to capture a variety of information about how fast the subject is moving. Useful for deblurring, apparently.

    Also worth noting that Ren talks about being able to code out problems in lenses, such as distortion or chromatic aberration, potentially cutting costs…
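
    A toy sketch of why the coded-exposure trick helps (my own illustration with arbitrary numbers, not from Ren's work): motion blur is a convolution with the shutter's open/closed pattern, and a solid exposure has spectral nulls that make deblurring ill-posed, while a fluttered pattern generally keeps its spectrum away from zero.

      # Compare the blur kernel's spectrum for a solid 32-sample exposure
      # (exact zeros, since 32 divides 256) versus a pseudo-random 0/1 code.
      import numpy as np

      N = 256
      rng = np.random.default_rng(1)
      box = np.ones(32)                              # ordinary long exposure
      coded = rng.integers(0, 2, 32).astype(float)   # fluttered shutter pattern

      for name, kernel in [("box", box), ("coded", coded)]:
          spectrum = np.abs(np.fft.rfft(kernel, N))
          print(name, "exposure: smallest spectral magnitude =", round(spectrum.min(), 4))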

  25. AGC says:

    I use a primitive 3D rig: two old digital cameras mounted side by side. What I found interesting is that, when overlapping the images, I can change what the viewer of the anaglyph perceives as the focus of the scene simply by overlapping the two pictures in a different place.

  26. traalfaz says:

    ISTM that a lot of 3D information would be retained in the image data too. It’s not a hologram but it’s somewhere between that and a flat image.

  27. teapot says:

    Mind: Blown.

    Many times I have pondered the feasibility of this concept. Then some smart people did it.

    @4: haha igadget fail?

  28. oldtaku says:

    Well, I would hope the default export format would be .jpg with everything at every focus distance perfectly sharp, since I think that's what Joe or Jane Blow would want.

    Obsessive post-processors would indeed be in for a whole new world of hurt.

  29. Anonymous says:

    It’s a “new” design from 1908:

    Lippmann, G.: "Épreuves réversibles donnant la sensation du relief" [Reversible prints giving the sensation of relief]. Journal of Physics 7, 4 (1908), 821–825.

    as referenced in http://www.adobe.com/technology/pdfs/SuperresolutionwithPlenopticCamera2.0.pdf
