20 May 2024 | Boqian Li*, Xuan Li*, Ying Jiang*, Tianyi Xie, Feng Gao, Huamin Wang, Yin Yang, and Chenfanfu Jiang
GarmentDreamer is a novel 3D garment synthesis framework that generates high-quality, simulation-ready textured garment meshes from text prompts. Traditional 3D garment creation is labor-intensive and time-consuming, involving sketching, modeling, UV mapping, and texturing. Recent advances in diffusion-based generative models have enabled new possibilities for 3D garment generation, but existing methods often suffer from inconsistencies among multi-view images or require additional processes to separate cloth from the underlying human model.
GarmentDreamer leverages 3D Gaussian Splatting (3DGS) as guidance to ensure consistent optimization in both garment deformation and texture synthesis. The method introduces a novel garment augmentation module guided by normal and RGBA information, and employs implicit Neural Texture Fields (NeTF) combined with Variational Score Distillation (VSD) to generate diverse geometric and texture details. Comprehensive qualitative and quantitative experiments demonstrate the superior performance of GarmentDreamer over state-of-the-art alternatives.
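The score-distillation idea behind VSD-style texture optimization can be illustrated with a toy sketch: parameters of a texture field are nudged along the score of a frozen "teacher" distribution until they settle at a mode. This is a minimal stand-in only; the actual method distills from a pretrained 2D diffusion model over rendered garment views, and the `teacher_score`, `texture`, and `target` names here are illustrative assumptions, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)
texture = rng.normal(size=4)               # stand-in for texture-field parameters
target = np.array([1.0, -0.5, 0.25, 2.0])  # stand-in for the teacher's preferred mode

def teacher_score(x):
    # Score of a unit Gaussian centered at `target`: grad log p(x) = target - x.
    # In the real pipeline, this role is played by a frozen diffusion model's
    # noise prediction on a rendered view of the garment.
    return target - x

lr = 0.1
for _ in range(200):
    # Distillation-style update: follow the teacher's score on the current state.
    texture += lr * teacher_score(texture)

# After enough steps, the parameters converge to the teacher's mode.
print(np.allclose(texture, target, atol=1e-3))
```

The key property this toy loop shares with score distillation is that the optimized asset never sees ground-truth data directly; it only receives gradient signals from a frozen generative prior.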
The key contributions of GarmentDreamer include:
1. A novel 3D garment synthesis method using diffusion models with 3DGS as reference.
2. A new garment deformer module using normal-based and RGBA-based guidance in coarse-to-fine mesh refinement stages.
3. The use of NeTF reconstructed and fine-tuned by VSD loss to generate high-quality garment textures.
4. Comprehensive experiments showing superior performance compared to prior methods.
GarmentDreamer addresses the challenges of generating simulation-ready, non-watertight garments with detailed geometries and textures, making it suitable for applications such as fashion design, virtual try-on, gaming, animation, and virtual reality.