Inserting 3D Objects into 2D Images with Realistic Lighting

Check out NVIDIA's new neural network that estimates albedo, normals, depth, and an HDR lighting volume from a single image, making it possible to insert 3D objects into 2D images with consistent lighting and shadows.

Zian Wang, Jonah Philion, Sanja Fidler, and Jan Kautz from NVIDIA have presented a neural network that can insert 3D objects into 2D images with realistic lighting. Inspired by classic volume rendering techniques, they propose a novel Volumetric Spherical Gaussian representation for lighting, which parameterizes the exitant radiance of the 3D scene surfaces on a voxel grid. On top of this representation, they designed a physics-based differentiable renderer that models an energy-conserving image formation process, enabling joint training of all intrinsic properties with a re-rendering constraint. This setup encourages physically correct predictions and avoids the need for ground-truth HDR lighting, which is not easily accessible.
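To make the lighting representation concrete: a spherical Gaussian is a smooth lobe on the sphere of directions, commonly written as G(v) = a · exp(λ(v·μ − 1)), where μ is the lobe axis, λ the sharpness, and a the amplitude. The sketch below is a minimal illustration of evaluating such a lobe per voxel; the function names and the per-voxel data layout are our own assumptions for demonstration, not NVIDIA's actual implementation.

```python
import numpy as np

def spherical_gaussian(v, mu, lam, a):
    """Evaluate a spherical Gaussian lobe G(v) = a * exp(lam * (dot(v, mu) - 1)).

    v, mu are 3D direction vectors (normalized internally), lam controls
    sharpness, and a is the amplitude (could be an RGB triple).
    """
    v = v / np.linalg.norm(v)
    mu = mu / np.linalg.norm(mu)
    return a * np.exp(lam * (np.dot(v, mu) - 1.0))

# Hypothetical per-voxel record: lobe axis, sharpness, RGB amplitude.
# (The actual voxel parameterization in the paper differs in detail.)
voxel = {
    "mu": np.array([0.0, 0.0, 1.0]),   # light arriving mostly from +z
    "lam": 4.0,
    "a": np.array([2.0, 1.8, 1.5]),    # HDR radiance can exceed 1.0
}

# Radiance seen along the lobe axis equals the amplitude (exp(0) = 1).
up = spherical_gaussian(np.array([0.0, 0.0, 1.0]), voxel["mu"], voxel["lam"], voxel["a"])

# Radiance falls off smoothly away from the axis.
side = spherical_gaussian(np.array([1.0, 0.0, 0.0]), voxel["mu"], voxel["lam"], voxel["a"])
```

Because the lobe is a smooth exponential of a dot product, it is differentiable in all its parameters, which is what lets a renderer built on such a grid backpropagate a re-rendering loss into the lighting estimate.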

"From a single image, our model jointly estimates albedo, normals, depth, and the HDR lighting volume," comments the team. "Our method predicts continuous HDR 3D spatially-varying lighting, which is critical in producing high-quality object insertion with realistic cast shadows and high-frequency details."

You can learn more here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
