Turning AI-Generated Images into 3D Scenes

A great result without using NeRF.

Turning 2D photos into 3D might not be a new concept, but digital artist Aloner One has achieved something novel and impressive: they turned AI-generated images into 3D scenes and refined them in real time in a non-destructive and streamlined workflow without using NeRF.

"It's beyond camera projection since you can make entire scenes viewable in any angles.. It's not a "magic button" solution but a tedious process and I think I'm like the first one to make this," they said on Twitter.

Aloner One also presented an experiment based on a 3D scan – they extracted a depth map from the scan and fed it to ControlNet to generate an initial texture, which they then projected onto the geometry. In case you've missed it, ControlNet lets you condition diffusion models with extra inputs, such as depth maps, which makes the generated pictures follow the source structure more closely.
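The artist hasn't shared their exact pipeline, but a minimal sketch of the depth-conditioning step might look like this: normalize a raw depth map into an 8-bit grayscale image that a depth ControlNet can accept. The model names in the comments refer to the Hugging Face diffusers library and are assumptions, not a description of Aloner One's actual setup.

```python
import numpy as np
from PIL import Image

def depth_to_condition(depth: np.ndarray) -> Image.Image:
    """Normalize a raw depth map to an 8-bit grayscale image
    suitable as a depth condition for ControlNet."""
    d = depth.astype(np.float32)
    rng = d.max() - d.min()
    if rng > 0:
        d = (d - d.min()) / rng  # scale to [0, 1]
    return Image.fromarray((d * 255).astype(np.uint8))

# With diffusers installed, the conditioned generation could then be wired up
# roughly like this (hypothetical model choices, not the artist's pipeline):
# from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
# controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth")
# pipe = StableDiffusionControlNetPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5", controlnet=controlnet)
# image = pipe("weathered stone ruins", image=depth_to_condition(depth)).images[0]
```

The resulting image preserves the scan's spatial layout, which is what lets the generated texture line up with the geometry when projected back.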

The artist then kept going back and forth between the depth-inpainting mix and projecting in 3D to fill in the missing texture information.
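The "projecting in 3D" half of that loop rests on standard camera geometry: each pixel, given its depth value and the camera intrinsics, lifts to a 3D point, so generated colors can be mapped onto the scene. A minimal pinhole-camera back-projection sketch (the function name and parameters are illustrative, not from the artist's workflow):

```python
import numpy as np

def backproject(depth: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Lift every pixel to a camera-space 3D point using its depth
    value and pinhole intrinsics -- the geometry behind projecting
    a generated texture onto a scene."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

Re-rendering those points from a new viewpoint reveals which regions have no texture yet; those gaps are what the depth-guided inpainting pass fills in on each iteration.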

The tech is still being polished, but the results can potentially open many new doors. Follow Aloner One on Twitter to see more and don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
