A New Approach to Reconstructing Hands From RGB Images

This model could be a useful tool for AR developers.

Do you work on AR projects? You might like this new research on reconstructing hands from RGB images, presented in the paper "Towards Accurate Alignment in Real-time 3D Hand-Mesh Reconstruction" at ICCV 2021.

3D hand-mesh reconstruction from RGB images has many applications, including augmented reality (AR) and mixed reality experiences. However, the task demands real-time speed, an accurate hand pose, and plausible mesh-image alignment, and meeting all three requirements at once is usually a challenge for developers.

The team of Xiao Tang, Tianyu Wang, and Chi-Wing Fu proposes dividing the hand-mesh reconstruction task into three stages: a joint stage that predicts hand joints and segmentation; a mesh stage that predicts a rough hand mesh; and a refine stage that fine-tunes the mesh with a per-vertex offset mesh for mesh-image alignment.
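To make the staged design concrete, here is a minimal sketch of how the three stages could be chained. The function names, tensor shapes, and placeholder logic are illustrative assumptions, not the paper's actual code; the 778-vertex count follows the widely used MANO hand model, and a real implementation would replace each placeholder with a trained network.

```python
import numpy as np

# Hypothetical sketch of the three-stage pipeline: joint stage -> mesh stage
# -> refine stage. All internals are placeholders standing in for networks.

def joint_stage(image):
    """Predict hand joints and a hand segmentation mask from an RGB image."""
    joints = np.zeros((21, 3))                       # 21 hand joints, (x, y, z)
    segmentation = np.zeros(image.shape[:2], bool)   # per-pixel hand mask
    return joints, segmentation

def mesh_stage(joints, segmentation):
    """Regress a rough hand mesh from the joint/segmentation predictions."""
    rough_mesh = np.zeros((778, 3))                  # 778 vertices (MANO-style)
    return rough_mesh

def refine_stage(rough_mesh, image):
    """Predict an offset mesh and add it for finer mesh-image alignment."""
    offsets = np.zeros_like(rough_mesh)              # per-vertex corrections
    return rough_mesh + offsets

image = np.zeros((256, 256, 3))                      # dummy RGB input
joints, seg = joint_stage(image)
mesh = refine_stage(mesh_stage(joints, seg), image)
print(mesh.shape)  # (778, 3)
```

Splitting the problem this way lets each stage be supervised with its own loss (joints, rough mesh, alignment offsets), which is what the authors credit for the finger-level mesh-image alignment at real-time speed.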

"With careful design in the network structure and in the loss functions, we can promote high-quality finger-level mesh-image alignment and drive the models together to deliver real-time predictions," states the abstract. "Extensive quantitative and qualitative results on benchmark datasets demonstrate that the quality of our results outperforms the state-of-the-art methods on hand-mesh/pose precision and hand-image alignment. In the end, we also showcase several real-time AR scenarios."

You can check out the full paper here and find the code on GitHub. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
