GVA: Reconstructing Vivid 3D Gaussian Avatars from Monocular Videos


19 Mar 2024 | Xinqi Liu, Chenming Wu, Jialun Liu, Xing Liu, Jinbo Wu, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang
This paper presents a novel method, GVA (Gaussian Volumetric Avatar), for creating vivid 3D Gaussian avatars from monocular video inputs. The key contributions of the paper are twofold: first, a pose refinement technique that improves the accuracy of hand and foot poses by aligning normal maps and silhouettes; second, a surface-guided re-initialization method that addresses unbalanced aggregation and initialization bias, ensuring accurate alignment of 3D Gaussian points with avatar surfaces. The proposed method achieves high-fidelity and vivid 3D Gaussian avatar reconstruction, as demonstrated through extensive experimental analyses that validate its performance in photo-realistic novel view synthesis and fine-grained control over the human body and hand pose. The project page is available at: https://3d-aigc.github.io/GVA/.
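The surface-guided re-initialization idea, relocating drifted 3D Gaussian points so they lie on the avatar surface, can be illustrated with a minimal sketch. The function below is a hypothetical toy stand-in, not the paper's implementation: it assumes the surface is available as a dense point sample and simply snaps each Gaussian center to its nearest surface sample by brute-force nearest-neighbor search.

```python
import numpy as np

def surface_reinitialize(gaussian_centers, surface_points):
    """Snap each Gaussian center to its nearest sampled surface point.

    Toy illustration of surface-guided re-initialization (assumed
    interface): `gaussian_centers` is (N, 3), `surface_points` is (M, 3).
    Returns an (N, 3) array of re-initialized centers on the surface.
    """
    # Pairwise squared distances between centers and surface samples.
    d2 = ((gaussian_centers[:, None, :] - surface_points[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)      # index of the closest surface sample
    return surface_points[nearest]   # re-initialized centers lie on the surface

# Example: snap three off-surface centers back onto a sampled unit sphere.
rng = np.random.default_rng(0)
surface = rng.normal(size=(1000, 3))
surface /= np.linalg.norm(surface, axis=1, keepdims=True)  # points on the unit sphere
centers = rng.normal(size=(3, 3)) * 2.0                    # drifted Gaussian centers
snapped = surface_reinitialize(centers, surface)
print(np.allclose(np.linalg.norm(snapped, axis=1), 1.0))   # True: centers now on surface
```

A production version would use a spatial index (e.g. a k-d tree) instead of the quadratic-memory distance matrix, and would also re-estimate each Gaussian's scale and orientation after relocation.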