5 Apr 2019 | Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, Michael J. Black
AMASS is a large and diverse database of human motion that unifies 15 different optical marker-based motion capture (mocap) datasets. It represents these datasets within a common framework and parameterization, enabling the creation of a unified database for animation, visualization, and training deep learning models. The database includes over 40 hours of motion data, spanning more than 300 subjects and over 11,000 motions. It is publicly available at http://amass.is.tue.mpg.de.
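To make the unified parameterization concrete, the sketch below loads a single AMASS sequence with NumPy. The file path is hypothetical, and the field names ('poses', 'betas', 'trans', 'mocap_framerate') reflect the published AMASS archive format; treat them as assumptions if your release differs.

```python
import numpy as np

# Load one AMASS sequence (file name is hypothetical; released
# sequences are distributed as .npz archives).
seq = np.load("CMU/01/01_01_poses.npz")

# Field names below follow the published AMASS format.
poses = seq["poses"]          # (num_frames, 156) SMPL-H axis-angle pose
betas = seq["betas"]          # (16,) body shape coefficients
trans = seq["trans"]          # (num_frames, 3) root translation in meters
fps = seq["mocap_framerate"]  # scalar capture rate, e.g. 120.0

print(f"{poses.shape[0]} frames at {fps} Hz, {betas.shape[0]} shape dims")
```

The 156-dimensional pose vector corresponds to SMPL-H's 52 joints (body plus articulated hands) in axis-angle form, which is what lets a single representation cover all 15 source datasets.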
The AMASS dataset was created using a new method called MoSh++, which converts mocap data into realistic 3D human meshes represented by a rigged body model. MoSh++ uses the SMPL model, which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh. The method works for arbitrary marker sets, while recovering soft-tissue dynamics and realistic hand motion. The dataset was evaluated using a new dataset of 4D body scans that are jointly recorded with marker-based mocap.
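MoSh++ itself is a staged optimization over SMPL shape, pose, marker placement, and soft-tissue coefficients; the toy sketch below illustrates only the underlying idea of fitting model parameters by least squares so that predicted marker positions match observed ones. The rigid "body model" here is a deliberate stand-in, not SMPL.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a body model: markers rigidly attached to a template,
# posed by a global rotation (axis-angle) and translation. MoSh++ replaces
# this with the full SMPL surface, per-joint pose, shape, and soft tissue.
template = np.array([[0.0, 0.0, 1.7],   # head
                     [0.2, 0.0, 1.4],   # shoulder
                     [0.0, 0.0, 1.0]])  # hip

def rodrigues(axis_angle):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def predict_markers(params):
    """Pose the template markers with a rotation and translation."""
    rot, trans = params[:3], params[3:]
    return (rodrigues(rot) @ template.T).T + trans

def residuals(params, observed):
    return (predict_markers(params) - observed).ravel()

# Synthetic "observed" markers from a known pose, plus noise.
true_params = np.array([0.0, 0.0, 0.3, 0.5, 0.1, 0.0])
observed = predict_markers(true_params) + 0.005 * np.random.randn(3, 3)

fit = least_squares(residuals, x0=np.zeros(6), args=(observed,))
print("recovered rotation/translation:", np.round(fit.x, 3))
```

In the real method the residual is computed against markers attached to the SMPL mesh surface, so the same fit recovers body shape and per-joint pose rather than a single rigid transform.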
The AMASS dataset is significantly richer than previous human motion collections: it covers a wide range of motions, including sequences with articulated hands and soft-tissue dynamics. It is designed to be useful for animation, visualization, and generating training data for deep learning, and it provides a consistent representation of human motion that can be adapted to new problems. Because it includes full 3D human meshes rather than skeletons alone, it is useful for many tasks, including generating synthetic training data for computer vision.
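As one example of generating training data, the hypothetical PyTorch dataset below slices AMASS-style pose arrays into fixed-length windows for a sequence model; the class name, window length, and directory layout are illustrative choices, not part of AMASS.

```python
import glob
import numpy as np
import torch
from torch.utils.data import Dataset

class AmassPoseWindows(Dataset):
    """Minimal sketch: fixed-length pose windows for sequence models.

    Assumes a directory tree of AMASS-style .npz files containing a
    'poses' array; window length and stride are arbitrary choices.
    """

    def __init__(self, root, window=64, stride=16):
        self.windows = []
        for path in glob.glob(f"{root}/**/*.npz", recursive=True):
            poses = np.load(path)["poses"].astype(np.float32)
            # Slide a fixed-length window over each sequence.
            for start in range(0, len(poses) - window + 1, stride):
                self.windows.append(poses[start:start + window])

    def __len__(self):
        return len(self.windows)

    def __getitem__(self, idx):
        return torch.from_numpy(self.windows[idx])
```

Because every source dataset shares the SMPL parameterization, a loader like this can mix all 15 mocap collections into one training stream without per-dataset skeleton retargeting.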