HAHA: Highly Articulated Gaussian Human Avatars with Textured Mesh Prior

1 Apr 2024 | David Svitov, Pietro Morerio, Lourdes Agapito, Alessio Del Bue
HAHA is a method for generating animatable human avatars from monocular video input. It combines Gaussian splatting with a textured mesh to achieve efficient, high-fidelity rendering: the textured mesh represents the body surface, while Gaussian splatting is used only where the mesh falls short, such as hair and out-of-mesh clothing. This hybrid design significantly reduces the number of Gaussians required for the full avatar, minimizes rendering artifacts, and, because the underlying mesh is fully articulated, enables animation of small body parts such as fingers that are traditionally overlooked.

The avatar is learned with a three-stage pipeline: the Gaussian and textured-mesh representations are trained first, and unnecessary Gaussians are then removed in an unsupervised manner, with regularization keeping rendering quality high while minimizing the number of Gaussians retained. Training requires only input video frames with SMPL-X fits, without additional labels.

HAHA demonstrates competitive reconstruction quality on the SnapshotPeople benchmark while using up to three times fewer Gaussians than state-of-the-art methods, and it outperforms previous methods on the X-Humans dataset both quantitatively and qualitatively, particularly in handling novel poses and views. Its efficiency in rendering and storage makes it suitable for applications requiring high-fidelity human avatars with minimal computational resources.
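To make the unsupervised pruning stage concrete, below is a minimal, self-contained sketch of the general idea: an L1 penalty on Gaussian opacities lets redundant Gaussians fade wherever the textured mesh already explains the image, after which low-opacity Gaussians are dropped. This is an illustration under stated assumptions, not HAHA's implementation; the function names, the `lambda_opacity` weight, and the pruning threshold are hypothetical, and the actual differentiable renderer and photometric loss are omitted.

```python
import torch

# Hypothetical sketch of opacity-based Gaussian pruning (not HAHA's code).
# The sparsity term below would be added to the photometric loss during
# training; pruning runs afterwards, once opacities have settled.

def opacity_sparsity_loss(opacity: torch.Tensor, lambda_opacity: float = 1e-2) -> torch.Tensor:
    # L1 penalty pushing Gaussians toward zero opacity wherever the
    # reconstruction loss does not need them (the mesh covers those areas).
    return lambda_opacity * opacity.abs().mean()

def prune(params: dict, threshold: float = 0.05) -> dict:
    # Keep only Gaussians whose learned opacity exceeds the threshold;
    # the textured mesh is left to represent the discarded regions.
    keep = params["opacity"].squeeze(-1) > threshold
    return {k: v[keep] for k, v in params.items()}

if __name__ == "__main__":
    n = 10_000
    params = {
        "position": torch.randn(n, 3),   # Gaussian centers
        "opacity":  torch.rand(n, 1),    # per-Gaussian opacity in [0, 1]
    }
    reg = opacity_sparsity_loss(params["opacity"])
    pruned = prune(params)
    print(f"reg loss: {reg.item():.4f}, kept {pruned['opacity'].shape[0]}/{n} Gaussians")
```

In a full pipeline, the regularizer trades a small amount of photometric accuracy for sparsity, which is what allows the reported reduction in Gaussian count without a large drop in rendering quality.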