RawNeRF: Reconstructing Scenes from Noisy Raw Images

Check out this new method for high-quality novel view synthesis from a collection of posed input images.

Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul Srinivasan, and Jonathan T. Barron from Google Research have presented a new technique for reconstructing a high-quality scene from a collection of posed input images. It builds on Neural Radiance Fields (NeRF), which is normally trained on tonemapped low dynamic range (LDR) images; the new method, called RawNeRF, instead trains directly on the camera's linear raw images.

The main distinctive feature of RawNeRF is that it also acts as a multi-image denoiser, combining information from tens or hundreds of input images. This robustness to noise means the technique can reconstruct scenes captured in near-darkness.
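To get an intuition for why combining many noisy photos helps, here is a minimal Python sketch (ours, not the authors' code) showing that zero-mean sensor noise averages out as more independent measurements of the same scene point are combined; the values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

true_radiance = 0.02   # dim scene value in linear raw space (illustrative)
noise_std = 0.05       # per-image zero-mean sensor noise (illustrative)

for n_images in (1, 25, 200):
    # n_images independent noisy raw measurements of the same scene point
    samples = true_radiance + rng.normal(0.0, noise_std, size=n_images)
    print(f"{n_images:3d} images -> estimate {samples.mean():+.4f} "
          f"(expected error ~ {noise_std / np.sqrt(n_images):.4f})")
```

The expected error of the combined estimate shrinks roughly with the square root of the number of images, which is why many very noisy raw frames can still yield a clean reconstruction.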

"We modify NeRF to train directly on linear raw images, preserving the scene's full dynamic range. By rendering raw output images from the resulting NeRF, we can perform novel high dynamic range (HDR) view synthesis tasks. In addition to changing the camera viewpoint, we can manipulate focus, exposure, and tonemapping after the fact," comments the team. "Although a single raw image appears significantly more noisy than a post-processed one, we show that NeRF is highly robust to the zero-mean distribution of raw noise. When optimized over many noisy raw inputs (25-200), NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images. As a result, our method, which we call RawNeRF, can reconstruct scenes from extremely noisy images captured in near-darkness."

Learn more here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artwork, and more.
