LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment


27 Feb 2024 | Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma
LiveHPS is a single-LiDAR-based approach for estimating 3D human pose and shape in large-scale, unconstrained environments. It predicts full SMPL parameters (pose, shape, and global translation) from consecutive LiDAR point-cloud frames and remains accurate under challenging poses and occlusion. To cope with the distribution-varying nature of LiDAR point clouds, the method exploits temporal-spatial geometric and dynamic information, which also mitigates occlusion and noise disturbance. Three dedicated modules handle distribution-varied, incomplete, and noisy input: an adaptive vertex-guided distillation module, a consecutive pose optimizer, and a skeleton-aware translation solver.

The paper also introduces FreeMotion, a large-scale human motion dataset containing multi-modal, multi-view data from calibrated LiDARs, cameras, and IMUs. The dataset provides full SMPL parameter annotations, enabling further research on in-the-wild human pose and shape (HPS) estimation.

Extensive experiments on FreeMotion and other public datasets demonstrate state-of-the-art performance in pose, shape, and translation estimation, with robustness to noise, long-distance scenarios, and severe occlusion. Running in real time at up to 45 fps, LiveHPS is practical for capturing human motion in large-scale scenes; the paper concludes that it is a significant contribution to in-the-wild HPS research.
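To make the output format concrete: the SMPL body model parameterizes a human as per-joint axis-angle rotations (72 pose values), a 10-dimensional shape vector, and a 3D global translation. The sketch below shows only this standard interface with placeholder values; the function name, window size, and internals are assumptions for illustration and do not reflect the paper's actual network.

```python
import numpy as np

T, N = 32, 512  # assumed frames per temporal window, points per LiDAR frame


def estimate_smpl(point_cloud_seq: np.ndarray):
    """Map a (T, N, 3) sequence of LiDAR point clouds to SMPL parameters.

    Returns per-frame pose (T, 72) axis-angle joint rotations, one body
    shape vector (10,), and per-frame global translation (T, 3) -- the
    standard SMPL parameterization that LiveHPS predicts.
    """
    assert point_cloud_seq.shape == (T, N, 3)
    # Placeholder regressor: the real model uses the paper's distillation
    # module, consecutive pose optimizer, and skeleton-aware translation
    # solver, none of which are reproduced here.
    pose = np.zeros((T, 72))
    shape = np.zeros(10)
    trans = point_cloud_seq.mean(axis=1)  # crude centroid as a stand-in
    return pose, shape, trans


pose, shape, trans = estimate_smpl(np.random.rand(T, N, 3))
print(pose.shape, shape.shape, trans.shape)  # (32, 72) (10,) (32, 3)
```

The per-frame pose and translation plus a single shape vector per sequence mirror how SMPL-based temporal methods typically structure their outputs.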