A New Approach To Rendering 3D Scenes From A Set Of Photos

This new model looks impressive.

Check out a new algorithm that takes a set of photos and reconstructs an explorable 3D scene from them. According to the team, point cloud rendering is performed by a differentiable renderer using multi-resolution one-pixel point rasterization.
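The team's renderer is not public, but the forward idea behind one-pixel point rasterization at multiple resolutions can be sketched roughly like this. This is a minimal, non-differentiable illustration (the paper's renderer gets gradients via "ghost geometry", which is omitted here); all function names and the z-buffer splatting details are my own assumptions, not the authors' code:

```python
import numpy as np

def rasterize_one_pixel(points, colors, depth, H, W):
    """Splat each projected point onto exactly one pixel, keeping the
    nearest point per pixel via a z-buffer. `points` holds (x, y) pixel
    coordinates, shape (N, 2). Illustrative sketch, not the paper's code."""
    image = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    for (x, y), c, z in zip(points, colors, depth):
        xi, yi = int(x), int(y)
        if 0 <= xi < W and 0 <= yi < H and z < zbuf[yi, xi]:
            zbuf[yi, xi] = z
            image[yi, xi] = c
    return image

def multi_resolution_pyramid(points, colors, depth, H, W, levels=3):
    """Rasterize the same point cloud at several resolutions. Coarser
    levels have fewer holes, which is what lets the downstream network
    do shading and hole-filling from the pyramid."""
    pyramid = []
    for l in range(levels):
        scale = 2 ** l
        pyramid.append(
            rasterize_one_pixel(points / scale, colors, depth,
                                H // scale, W // scale))
    return pyramid
```

In this reading, each pyramid level trades resolution for coverage: a hole at the finest level is usually covered at a coarser one, giving the shading network something to work with everywhere.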

"Spatial gradients of the discrete rasterization are approximated by the novel concept of ghost geometry. After rendering, the neural image pyramid is passed through a deep neural network for shading calculations and hole-filling. A differentiable, physically-based tonemapper then converts the intermediate output to the target image," states the abstract. "Since all stages of the pipeline are differentiable, we optimize all of the scene's parameters i.e. camera model, camera pose, point position, point color, environment map, rendering network weights, vignetting, camera response function, per image exposure, and per image white balance."
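To get a feel for why a differentiable tonemapper lets the pipeline optimize per-image exposure and white balance, here is a heavily simplified sketch. It models only exposure, white balance, and a fixed gamma curve (the paper's tonemapper also handles vignetting and a learned camera response function), and fits exposure by plain gradient descent with a hand-derived gradient; everything here is a hypothetical illustration rather than the team's implementation:

```python
import numpy as np

def tonemap(hdr, exposure, white_balance, gamma=1.0 / 2.2):
    """Toy physically-motivated tonemapper: scale HDR radiance by
    per-image exposure and white balance, then apply a gamma curve."""
    ldr = np.clip(hdr * exposure * white_balance, 0.0, None)
    return ldr ** gamma

def fit_exposure(hdr, target, white_balance, steps=200, lr=0.5):
    """Recover the exposure that maps `hdr` to `target` by gradient
    descent. Works because every step of tonemap() is differentiable;
    gradient is derived by hand here (assumes no clipping is active)."""
    exposure = 1.0
    gamma = 1.0 / 2.2
    for _ in range(steps):
        pre = hdr * exposure * white_balance          # value before the curve
        out = pre ** gamma                            # tonemapped output
        # d(mean squared error)/d(exposure) via the chain rule
        grad = np.mean(2.0 * (out - target)
                       * gamma * pre ** (gamma - 1.0)
                       * hdr * white_balance)
        exposure -= lr * grad
    return exposure
```

The same chain-rule argument extends to every other quantity the abstract lists: as long as each stage is differentiable, gradients flow from the image loss back to camera pose, point positions, network weights, and so on.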

The team believes their approach can synthesize sharper and more consistent novel views than existing methods "because the initial reconstruction is refined during training". Sadly, no code is available yet, but you can check out the team's paper here.

Don't forget to join our new Reddit page and our new Telegram channel, follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
