GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details

20 May 2024 | Boqian Li, Xuan Li, Ying Jiang, Tianyi Xie, Feng Gao, Huamin Wang, Yin Yang, and Chenfanfu Jiang
GarmentDreamer is a novel method for generating high-quality, simulation-ready 3D garment meshes from text prompts, leveraging 3D Gaussian Splatting (3DGS) for guidance. Traditional 3D garment creation is labor-intensive, requiring sketching, modeling, UV mapping, and texturing. Recent advances in diffusion-based generative models have enabled 3D garment generation from text, images, and videos, but existing methods face challenges such as multi-view inconsistency and a lack of high-fidelity details.

GarmentDreamer addresses these issues by using 3DGS to ensure consistent optimization in both garment deformation and texture synthesis. It introduces a novel garment augmentation module guided by normal and RGBA information, and employs implicit Neural Texture Fields (NeTF) combined with Variational Score Distillation (VSD) to generate diverse geometric and texture details. Garment geometry is deformed via a two-stage training process: masks guide the coarse-stage optimization, while RGB renderings and normal maps drive the fine-stage refinement.

Textures are then synthesized with NeTF and VSD, yielding simulation-ready garments with detailed appearance. The method is validated through comprehensive qualitative and quantitative experiments, demonstrating superior performance compared to state-of-the-art alternatives. The project page is available at https://xuan-li.github.io/GarmentDreamerDemo/.
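To make the two-stage geometry deformation concrete, here is a minimal PyTorch-style sketch of what the coarse and fine objectives could look like. All names, loss choices, and weights below are illustrative assumptions, not the paper's actual implementation: the summary only states that masks supervise the coarse stage while RGB renderings and normal maps supervise the fine stage.

```python
import torch
import torch.nn.functional as F

def coarse_stage_loss(rendered_mask, target_mask):
    # Coarse stage: align the deformed garment's silhouette with the
    # 3DGS-guided target using binary mask renderings in [0, 1].
    return F.binary_cross_entropy(rendered_mask, target_mask)

def fine_stage_loss(rendered_rgb, target_rgb,
                    rendered_normals, target_normals,
                    w_rgb=1.0, w_normal=0.5):
    # Fine stage: refine geometry with RGB renderings and normal maps so
    # wrinkle and fold details emerge. Weights are illustrative.
    rgb_term = F.l1_loss(rendered_rgb, target_rgb)
    # Normal maps hold unit vectors of shape (B, 3, H, W); a cosine term
    # penalizes orientation error between rendered and target normals.
    normal_term = (1.0 - F.cosine_similarity(
        rendered_normals, target_normals, dim=1)).mean()
    return w_rgb * rgb_term + w_normal * normal_term
```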
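For texture synthesis, the method pairs an implicit Neural Texture Field with VSD. A NeTF can be thought of as an MLP that maps garment surface points to colors; the sketch below shows one plausible minimal form using a NeRF-style positional encoding. Every architectural detail here (layer widths, frequency count, activations) is an assumption for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class NeuralTextureField(nn.Module):
    """Minimal sketch of an implicit neural texture field: an MLP mapping
    points on the garment surface to RGB colors."""

    def __init__(self, n_freqs=6, hidden=256):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 + 3 * 2 * n_freqs  # xyz plus sin/cos frequency encodings
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def positional_encoding(self, x):
        # Standard NeRF-style frequency encoding of surface coordinates.
        feats = [x]
        for i in range(self.n_freqs):
            feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
        return torch.cat(feats, dim=-1)

    def forward(self, points):
        # points: (N, 3) surface samples from the deformed garment mesh.
        return self.mlp(self.positional_encoding(points))
```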
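The Variational Score Distillation objective used for appearance optimization was introduced in ProlificDreamer (Wang et al., 2023). In that formulation, the gradient with respect to the texture parameters $\theta$ is

$$
\nabla_\theta \mathcal{L}_{\mathrm{VSD}} = \mathbb{E}_{t,\epsilon,c}\!\left[\,\omega(t)\,\big(\epsilon_{\mathrm{pretrain}}(x_t; y, t) - \epsilon_\phi(x_t; y, c, t)\big)\,\frac{\partial g(\theta, c)}{\partial \theta}\right],
$$

where $g(\theta, c)$ renders the scene from camera $c$, $x_t$ is the noised rendering at diffusion timestep $t$, $y$ is the text prompt, $\omega(t)$ is a timestep weighting, $\epsilon_{\mathrm{pretrain}}$ is the pretrained diffusion model's noise prediction, and $\epsilon_\phi$ is a LoRA-fine-tuned network that scores the distribution of the current renderings. The exact symbols in the GarmentDreamer paper may differ; this follows the ProlificDreamer notation.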