"NeRF" turns 2D photos into convincing 3D scenes

NVIDIA's Instant NeRF is an AI-driven technique that takes standard 2D photos and builds a 3D scene out of them. It's far more comprehensive than prior efforts, which tended to leave low-fidelity artifacts and voids around the peripheries. The AI fills all of that in, very convincingly, in moments.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly — making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering.
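To make the idea concrete, here's a minimal sketch (not NVIDIA's code) of the volume-rendering step at the heart of NeRF-style inverse rendering, based on the rendering equation from the original NeRF paper: a neural network predicts a density and color at sample points along a camera ray, and those samples are alpha-composited into one pixel. The function name `composite_ray` is illustrative, not part of any real API.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (the NeRF rendering equation).

    densities: (N,)   predicted volume density sigma at each sample
    colors:    (N, 3) predicted RGB color at each sample
    deltas:    (N,)   distance between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: a ray passing through a dense red region, then a blue one
densities = np.array([0.1, 5.0, 5.0, 0.1])
colors = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
deltas = np.full(4, 0.25)
print(composite_ray(densities, colors, deltas))  # mostly red: the dense region occludes what's behind it
```

Training amounts to adjusting the network so that pixels rendered this way match the input photos, which is why only a set of images and their camera poses is needed.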

NVIDIA applied this approach to a popular new technology called neural radiance fields, or NeRF. The result, dubbed Instant NeRF, is the fastest NeRF technique to date, achieving more than 1,000x speedups in some cases. The model requires just seconds to train on a few dozen still photos — plus data on the camera angles they were taken from — and can then render the resulting 3D scene within tens of milliseconds.
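The "camera angles" data is what turns each photo into training signal. A sketch of that step, assuming a standard pinhole-camera model (the helper `pixel_rays` is hypothetical, not NVIDIA's API): each photo's pose converts its pixels into rays through the scene, and those rays are what get rendered and compared against the photo during training.

```python
import numpy as np

def pixel_rays(height, width, focal, cam_to_world):
    """Return ray origins and directions for every pixel of one posed photo.

    cam_to_world: (4, 4) camera pose matrix (rotation + translation).
    """
    j, i = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    # Directions in camera space: x right, y up, camera looks along -z
    dirs = np.stack(
        [(i - width / 2) / focal,
         -(j - height / 2) / focal,
         -np.ones_like(i, dtype=float)],
        axis=-1,
    )
    # Rotate directions into world space; all rays share the camera's origin
    rays_d = dirs @ cam_to_world[:3, :3].T
    rays_o = np.broadcast_to(cam_to_world[:3, 3], rays_d.shape)
    return rays_o, rays_d

origins, directions = pixel_rays(4, 4, focal=3.0, cam_to_world=np.eye(4))
print(directions[0, 0])  # the corner pixel's viewing direction in world space
```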

It turns out that the deepfake scene from 1987's The Running Man (embedded below) was not only disarmingly good but has also arrived in reality on its science-fictional schedule. (The film is set in 2017, but a few more years pass in-movie by the time we reach this scene.)