Cory Doctorow at 1:14 pm Fri, Nov 20, 2009
Coming soon to a science fiction plot near you: with the right software, a plain-jane webcam can be a 3D scanner. It's a project from Qi Pan, a PhD candidate at the Cambridge University Engineering Department.
ProFORMA: Probabilistic Feature-based On-line Rapid Model Acquisition
Oh man. I want this right now – I’ve spent the last few days building a man’s neck in LightWave and manually tracking it for a VFX shot, and something like this would make that kind of additive effect so much easier!
This is basically Photosynth, which everyone can use now: http://www.photosynth.com
Photosynth is nothing like this. This is more like DAVID Laserscan without the requirement of a laser.
This is absolutely nothing like photosynth.
Give this man his PhD. This is truly awesome.
Strata has had something sorta similar for a while now, but it uses a paper reference disk under the object, and multiple still photos.
This is very cool.
To be sure, this isn’t a new thing; Qi Pan has made refinements and improvements on a technique that’s been around for a while. I saw a very similar demo in 1995, though the resulting model wasn’t as perfect, and I don’t think the processing was done in real time back then.
A lot of the early research in this area was from military reconnaissance, AIUI. Fly some aircraft over an area with movie cameras, postprocess the films on the ground, and you have a detailed topographic map including all structures etc.
We’ve just put an interview with Qi Pan, the PhD researcher behind ProFORMA, live on Shapeways: http://www.shapeways.com/blog/archives/332-Interview-with-Qi-Pan-about-his-Webcam-3D-scanner-proForma.html
Thought you guys might like to know.
It’s impressive, and at the same time it falls way short of being usable if the goal is, as stated, “model acquisition.”
Back to the lab with you!
What do you mean? It acquires a 3D model just by turning the object around in front of the camera. How does it fall short of its goal?
This is a vast step beyond the “reference picture” acquisition model, which is becoming trivial thanks to ever-evolving projects. You can tell it’s a lab picture, of course: there is no attempt at making this appear better than it currently is. Given that this is a PhD candidate’s effort, though, with some luck Qi Pan’ll get funding and perfect this technique. Can you imagine how this might improve virtual acting (given enough cameras), for example? No awkward styrofoam balls stuck to you, no weird exaggerations required.
Colour me impressed.
Give it a few years, and they’ll be using a variation of this technology to make RealDolls that look exactly like your celebrity of choice.
What I have seen in the past that matches this in any way uses structured light and a video camera. It is faster and generates a 3D model at 60 Hz, though without color, and perhaps a bit grainier; it does seem to be more correct in the geometry itself. The idea is similar: you turn an object around in front of the camera. I’d like to see what he could do with two cameras and a projector. Also, what does it do with a human body?

One issue is that this seems to require the camera to remain stationary. The implementation I just mentioned uses a handheld video camera, which you move around an object you might hold in your hand if you wish. These approaches need to be melded to allow you to zoom in and match different inputs (color camera, laser, structured light) to gain the maximum in high-speed acquisition, resolution, and accuracy of geometry and color.

What I like most is that this is not a point cloud (which he starts with to find landmarks, I guess) but an actual carving determined by where the landmark probably is, hidden around the corner. It’s cool, and seems to match a bit what the human brain does, perhaps. The drawback is how the geometry can get messed up and stay that way, viz. the webbing left between steeple and nave. I’m not up on the latest, though; perhaps someone who has studied the area could answer? These video-based approaches seem to be fast and partially self-adjusting (his finger is mostly not captured).
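The carving idea the commenter describes can be sketched very roughly: keep only the volume elements whose projections fall inside the object in every view. This is a minimal NumPy illustration of classic silhouette-based space carving, not Qi Pan's probabilistic tetrahedron carving; the pinhole-camera setup and all names here are assumptions for the example.

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Keep only voxels whose projection lands inside the object
    silhouette in every view (simplified space carving).

    voxels:      (N, 3) array of 3D points
    cameras:     list of 3x4 projection matrices, one per view
    silhouettes: list of 2D boolean masks (True = inside object)
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, sil in zip(cameras, silhouettes):
        proj = homog @ P.T                                  # project into image
        px = (proj[:, :2] / proj[:, 2:3]).astype(int)       # pixel coordinates
        inside = (px[:, 0] >= 0) & (px[:, 0] < sil.shape[1]) & \
                 (px[:, 1] >= 0) & (px[:, 1] < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[px[inside, 1], px[inside, 0]]
        keep &= hit          # carve away anything outside this silhouette
    return voxels[keep]
```

ProFORMA replaces the hard yes/no silhouette test with a probabilistic decision per tetrahedron, which is why stray geometry (like the webbing between steeple and nave) can survive a carve and persist.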
This is great. Nice interface with the AR overlaid instructions, too.
Incredible. I can’t wait for the download; I hope the model can be exported as a .fbx or .obj for a Unity project.
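The .obj format the commenter hopes for is plain text, so any tool that has vertices and faces can emit it. A minimal writer, purely illustrative (this helper is hypothetical and not part of ProFORMA):

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh as a Wavefront .obj file.

    vertices: iterable of (x, y, z) tuples
    faces:    iterable of (a, b, c) vertex-index triples (1-based,
              as the .obj format requires)
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a} {b} {c}\n")
```

A file like this imports directly into Unity, Blender, or LightWave.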
On his website he says he’ll be releasing a Linux version of the software, followed by a Windows version.
Sign up to keep updated; hopefully he’ll be inspired to hurry along if he sees there’s enough interest.
This is nothing like Photosynth. Photosynth is just a bunch of photos connected together; there is nothing 3D about it at all.