4D-DRESS: A 4D Dataset of Real-World Human Clothing With Semantic Annotations

29 Apr 2024 | Wenbo Wang, Hsuan-I Ho, Chen Guo, Boxiang Rong, Artur Grigorev, Jie Song, Juan Jose Zarate, Otmar Hilliges
4D-DRESS is the first real-world 4D dataset of human clothing with semantic annotations. It captures 64 human outfits in 520 motion sequences, totaling 78,000 textured scans, and provides high-quality 4D textured scans, vertex-level semantic labels for clothing types, and extracted garment meshes alongside registered SMPL(-X) body meshes. This realistic and challenging data complements synthetic sources and enables advances in research on lifelike human clothing.

The annotations were produced with a semi-automatic 4D human parsing pipeline that combines human-in-the-loop verification with automatic labeling to accurately annotate 4D scans across diverse garments and body motions.

Each of the 520 motion sequences contains 150 frames captured at 30 fps. Every frame consists of multi-view images, an 80,000-face triangle mesh with per-vertex semantic annotations, and a 1K-resolution texture map; each garment also comes with a canonical template for clothing simulation studies (see the loading sketch below). The data was recorded from 32 participants wearing a variety of clothing items and performing dynamic motions, producing challenging clothing deformations: distances from garments to the registered SMPL body surface reach up to 14.76 cm.

4D-DRESS thus serves as a benchmark for a range of computer vision and graphics tasks, including clothing simulation, reconstruction, and human parsing. The authors evaluate existing methods on these tasks, demonstrating the dataset's value for realistic human clothing research, and expect it to support future work on realistic animated digital avatars.
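To make the per-frame structure concrete, here is a minimal Python sketch of how one might load a single frame (scan mesh, per-vertex labels) and cut out one garment as a submesh. The directory layout, file names, and label IDs are hypothetical assumptions for illustration; the actual 4D-DRESS release format may differ.

```python
# Hypothetical loader for one 4D-DRESS frame.
# File layout, names, and label IDs are assumptions, not the official release format.
from pathlib import Path

import numpy as np
import trimesh

# Assumed semantic classes; the actual label set and IDs may differ.
LABELS = {0: "skin", 1: "hair", 2: "shoe", 3: "upper", 4: "lower", 5: "outer"}


def load_frame(frame_dir: Path):
    """Load the textured scan mesh and its per-vertex semantic labels."""
    # ~80,000-face textured scan (file format assumed here to be .obj).
    mesh = trimesh.load(frame_dir / "scan.obj", process=False)

    # Vertex-level semantic labels: one integer label ID per mesh vertex.
    labels = np.load(frame_dir / "labels.npy")
    assert labels.shape[0] == len(mesh.vertices)

    return mesh, labels


def extract_garment(mesh: trimesh.Trimesh, labels: np.ndarray, label_id: int):
    """Keep only the faces whose three vertices all carry the given label."""
    face_mask = np.all(labels[mesh.faces] == label_id, axis=1)
    return mesh.submesh([np.where(face_mask)[0]], append=True)


if __name__ == "__main__":
    # Hypothetical subject/sequence/frame path.
    mesh, labels = load_frame(Path("4D-DRESS/00122/Take1/frame_00001"))
    upper = extract_garment(mesh, labels, label_id=3)  # e.g. the upper garment
    print(f"scan: {len(mesh.vertices)} verts, upper garment: {len(upper.vertices)} verts")
```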
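The 14.76 cm figure quantifies how far loose garments deviate from the registered SMPL body surface. Below is a hedged sketch of how such a garment-to-body distance could be computed with trimesh's closest-point query; it assumes both meshes are already loaded in the same metric coordinate frame, and the file names are placeholders. This is one plausible way to compute the statistic, not necessarily the authors' exact procedure.

```python
# Sketch: per-vertex distance from a garment mesh to the SMPL body surface.
# Assumes both meshes share the same metric coordinate frame.
import numpy as np
import trimesh


def garment_to_body_distance(garment: trimesh.Trimesh,
                             body: trimesh.Trimesh) -> np.ndarray:
    """Unsigned distance from each garment vertex to its closest point on the body."""
    # closest_point returns, for each query point, the nearest surface point,
    # the distance to it, and the index of the containing triangle.
    _closest, dist, _tri_id = trimesh.proximity.closest_point(body, garment.vertices)
    return dist


if __name__ == "__main__":
    garment = trimesh.load("upper_garment.obj", process=False)  # placeholder file
    body = trimesh.load("smpl_body.obj", process=False)         # registered SMPL mesh
    d = garment_to_body_distance(garment, body)
    # For loose outfits in 4D-DRESS, this maximum reaches up to 14.76 cm.
    print(f"max garment-to-body distance: {d.max() * 100:.2f} cm")
```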