5 Apr 2019 | Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, Michael J. Black
The paper introduces AMASS, a large and diverse database of human motion that unifies 15 optical marker-based motion capture (mocap) datasets under a common framework and parameterization. The unification is performed by MoSh++, a new method that converts mocap marker data into realistic 3D human meshes represented by a rigged body model (SMPL). The method handles arbitrary marker sets and recovers both soft-tissue dynamics and realistic hand motion. To evaluate MoSh++, the authors collect a new dataset of 4D body scans recorded simultaneously with marker-based mocap, and tune the method's hyperparameters to minimize the distance between the ground-truth scans and the estimated body meshes. AMASS is significantly richer than previous human motion collections, with more than 40 hours of motion data spanning 300 subjects and 11,451 motions. The dataset is publicly available to the research community, enabling new applications in animation, visualization, and deep learning.
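To make the fitting idea concrete, here is a minimal sketch of a MoSh-style optimization step: SMPL shape and pose parameters are adjusted by gradient descent so that model surface points land on the observed markers. It assumes the `smplx` package for the body model; the model path, marker-to-vertex mapping, and marker data below are hypothetical placeholders, and the real MoSh++ objective additionally optimizes marker placement and includes pose, shape, and soft-tissue terms.

```python
# Sketch only: fit SMPL shape and pose to a single frame of mocap markers
# by minimizing marker-to-surface distance. Not the paper's actual pipeline.
import torch
import smplx

# Path to SMPL model files is a placeholder; download models separately.
model = smplx.create("models", model_type="smpl", gender="neutral")

# Hypothetical mapping from physical markers to nearest mesh vertices,
# and stand-in "observed" marker positions (real data would come from mocap).
marker_vertex_ids = torch.tensor([331, 3021, 412, 1666])
observed_markers = torch.rand(len(marker_vertex_ids), 3)

# Parameters to optimize: shape (betas), body pose, and root orientation.
betas = torch.zeros(1, 10, requires_grad=True)
body_pose = torch.zeros(1, 69, requires_grad=True)
global_orient = torch.zeros(1, 3, requires_grad=True)

optimizer = torch.optim.Adam([betas, body_pose, global_orient], lr=0.01)
for step in range(200):
    optimizer.zero_grad()
    output = model(betas=betas, body_pose=body_pose, global_orient=global_orient)
    # Predicted marker locations: surface vertices at the marker indices.
    simulated = output.vertices[0, marker_vertex_ids]
    # Data term only; MoSh++ adds priors and per-marker offset estimation.
    loss = ((simulated - observed_markers) ** 2).sum()
    loss.backward()
    optimizer.step()
```

In practice the optimization runs over many frames, with shape and marker offsets estimated once per subject and pose estimated per frame, which is what lets a single procedure unify datasets captured with very different marker sets.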