3D scanning with a plain webcam


17 Responses to “3D scanning with a plain webcam”

  1. Anonymous says:

    Oh man. I want this right now – I’ve spent the last few days building a man’s neck in LightWave and manually tracking it for a VFX shot, and something like this would make that kind of additive effect so much easier!

  2. Anonymous says:

    This is basically Photosynth, which everyone can use now: http://www.photosynth.com

  3. Ian70 says:

    Give this man his PhD. This is truly awesome.

  4. TikiHead says:

    Strata has had something sorta similar for a while now, but it uses a paper reference disk under the object, and multiple still photos.

    This is very cool.

  5. Anonymous says:

    To be sure, this isn’t a new thing; Qi Pan has made refinements and improvements on a technique that’s been around for a while. I saw a very similar demo in 1995, though the resulting model wasn’t as perfect, and I don’t think the processing was done in real time back then.

    A lot of the early research in this area was from military reconnaissance, AIUI. Fly some aircraft over an area with movie cameras, postprocess the films on the ground, and you have a detailed topographic map including all structures etc.

  6. Anonymous says:

    We’ve just put an interview with Qi Pan, the PhD researcher behind ProForma, live on Shapeways: http://www.shapeways.com/blog/archives/332-Interview-with-Qi-Pan-about-his-Webcam-3D-scanner-proForma.html

    Thought you guys might like to know.

  7. robcat2075 says:

    It’s impressive and at the same time it falls way short of being usable if the goal is, as stated, “model acquisition”.

    Back to the lab with you!

    • SamSam says:

      It’s impressive and at the same time it falls way short of being usable if the goal is, as stated, “model acquisition”.

      What do you mean? It acquires a 3d model just by turning the object around in front of the camera. How does it fall short of its goal?

  8. jokel says:

    This is a vast step beyond the “reference picture” acquisition model, which is becoming trivial thanks to ever-evolving projects. You can tell it’s a lab picture, of course: there is no attempt at making this appear better than it currently is. Given that this is a PhD candidate’s effort, though, with some luck Qi Pan’ll get funding and perfect this technique. Can you imagine how this might improve virtual acting (given enough cameras), for example? No awkward styrofoam balls stuck to you, no weird exaggerations required.

    Colour me impressed.

  9. Daemon says:

    Give it a few years, and they’ll be using a variation of this technology to make RealDolls that look exactly like your celebrity of choice.

  10. Anonymous says:

    What I have seen in the past that comes closest to this uses structured light and a video camera. It is faster, generating a 3D model at 60 Hz, though without color and perhaps a bit grainier; the geometry itself does seem to be more accurate. The idea is similar, with the object being turned around in front of the camera. I’d like to see what he could do with two cameras and a projector. Also, what does it do with a human body?

    One issue is that this seems to require the camera to remain stationary. The implementation I just mentioned uses a handheld video camera, which you simply move around an object you might hold in your hand if you wish. These approaches need to be melded to allow you to zoom in and combine different inputs (color camera, laser, structured light) to gain the maximum in high-speed acquisition, resolution, and accuracy of geometry and color.

    What I like most is that the result is not a point cloud (which he starts with to find landmarks, I guess) but an actual carving determined by where each landmark probably is, even hidden around a corner (see the sketch after these comments). It’s cool and perhaps matches a bit of what the human brain does. The drawback is how the geometry can get messed up and stay that way, viz. the webbing left between the steeple and the nave. I’m not up on the latest work, though; perhaps someone who has studied the area could answer? These video-based approaches seem to be fast and partially self-adjusting (as his finger is mostly not captured).

  11. arkizzle / Moderator says:

    This is great. Nice interface with the AR overlaid instructions, too.

  12. sciguy77 says:

    Incredible. I can’t wait for the download; I hope the model can be exported as a .fbx or .obj for a Unity project.

  13. pixleshifter says:

    On his website he says he’ll be releasing a Linux version of the software, followed by a Windows version.
    Sign up to keep updated; hopefully he’ll be inspired to hurry along if he sees there’s enough interest.

  14. Anonymous says:

    This is nothing like Photosynth. Photosynth is just a bunch of photos connected together; there is nothing 3D about it at all.
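
A note on the “carving” mentioned in comment 10: ProFORMA reportedly builds a mesh over the tracked feature points and carves away the parts that each camera view can see through. The sketch below only illustrates that general free-space carving idea, not Qi Pan’s actual implementation; it uses a made-up voxel grid, camera position, and landmark coordinates, and assumes NumPy is available.

    import numpy as np

    # Object assumed to sit inside the unit cube [0,1]^3, covered by GRID^3 voxels.
    GRID = 64
    occupied = np.ones((GRID, GRID, GRID), dtype=bool)   # start fully solid, then carve

    def carve_ray(camera, landmark, occupied, steps=200):
        """Mark voxels between the camera and an observed landmark as empty.

        If a surface point was seen from this camera position, the space along
        the viewing ray in front of that point cannot belong to the object.
        """
        # Stop slightly short of the landmark so its own voxel stays solid.
        for t in np.linspace(0.0, 0.95, steps):
            p = camera + t * (landmark - camera)          # point along the viewing ray
            idx = np.floor(p * GRID).astype(int)          # voxel containing p
            if np.all(idx >= 0) and np.all(idx < GRID):
                occupied[tuple(idx)] = False

    # Hypothetical data: one camera position and two triangulated feature points.
    camera = np.array([0.5, 0.5, -1.0])
    landmarks = [np.array([0.40, 0.50, 0.30]), np.array([0.60, 0.55, 0.35])]
    for lm in landmarks:
        carve_ray(camera, lm, occupied)

    print("solid voxels remaining:", occupied.sum())

Repeating this over many frames as the object is turned leaves material only where no viewing ray has ever passed, which is also why occluded concavities (like the webbing between steeple and nave mentioned above) can survive if no view ever sees past them.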
