Garment3DGen: 3D Garment Stylization and Texture Generation

13 Aug 2024 | Nikolaos Sarafianos, Tuur Stuyck, Xiaoyu Xiang, Yilei Li, Jovan Popovic, Rakesh Ranjan
Garment3DGen is a method that generates 3D textured garments directly from images or text, enabling simulation-ready assets for applications such as VR interaction and cloth simulation. Given a base mesh and an image, it performs topology-preserving deformations to match the image guidance, producing high-quality 3D garments with realistic textures. The method leverages diffusion-based image-to-3D techniques to generate a coarse 3D geometry, which is then used as pseudo ground-truth for mesh deformation optimization. Additional losses ensure the base mesh deforms towards the target while preserving mesh quality and topology. A texture estimation module generates high-fidelity UV textures that are globally and locally consistent with the input.

Garment3DGen outperforms existing methods in terms of both embedding and perceptual similarity with the input image, and produces simulation-ready garments that can be used for downstream tasks. The method supports both image and text inputs, and can generate garments from sketches. It also enables body-garment co-optimization, allowing garments to be scaled and fitted to parametric body models. The approach is efficient, with a runtime of about 5 minutes on a single H100 GPU. It has been evaluated on various datasets and compared against other methods, showing superior performance in generating realistic and simulation-ready garments. The method has applications in VR, cloth simulation, and garment design.
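The core of the pipeline is the mesh deformation stage: the coarse image-to-3D output acts as a pseudo ground-truth target, and the base garment mesh is optimized to fit it while regularization terms keep the topology and triangle quality intact so the result remains simulation-ready. The sketch below illustrates this idea with a simple per-vertex offset optimization using PyTorch3D losses; the offset-based parameterization, loss weights, and sample counts are illustrative assumptions, not the paper's exact formulation.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import (
    chamfer_distance,
    mesh_laplacian_smoothing,
    mesh_normal_consistency,
    mesh_edge_loss,
)

def deform_base_to_target(base_mesh: Meshes, target_mesh: Meshes,
                          iters: int = 2000, lr: float = 1e-3) -> Meshes:
    """Optimize per-vertex offsets so the base garment mesh matches the
    coarse pseudo ground-truth geometry while preserving its topology.
    Illustrative sketch only; weights and parameterization are assumptions."""
    offsets = torch.zeros_like(base_mesh.verts_packed(), requires_grad=True)
    optimizer = torch.optim.Adam([offsets], lr=lr)

    for _ in range(iters):
        optimizer.zero_grad()
        # The connectivity (faces) never changes, only vertex positions.
        deformed = base_mesh.offset_verts(offsets)

        # Fitting term: surface samples should match the coarse target geometry.
        src_pts = sample_points_from_meshes(deformed, 5000)
        tgt_pts = sample_points_from_meshes(target_mesh, 5000)
        fit_loss, _ = chamfer_distance(src_pts, tgt_pts)

        # Regularizers keep the deformation smooth and the triangles well shaped,
        # which is what keeps the output usable in a cloth simulator.
        reg = (mesh_laplacian_smoothing(deformed)
               + 0.1 * mesh_normal_consistency(deformed)
               + 1.0 * mesh_edge_loss(deformed))

        (fit_loss + 0.5 * reg).backward()
        optimizer.step()

    return base_mesh.offset_verts(offsets.detach())
```

In the paper the deformation is guided by additional geometric and embedding-based objectives; the Chamfer fitting term and the smoothness regularizers above simply stand in for the two roles described in the summary, pulling the base mesh towards the target while preserving mesh quality and topology.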