Garment3DGen is a novel method for synthesizing 3D garment assets from a base mesh and a single input image, allowing users to generate textured 3D clothing from real or synthetic images. The method leverages recent image-to-3D diffusion models to produce 3D garment geometry that serves as guidance; the base mesh is then deformed to match this guidance and the input image while preserving its topology and structure. The resulting assets can be draped and simulated on human bodies, making them suitable for applications such as cloth simulation and hand-garment interaction in VR. Key contributions include:
1. **Deformation-Based Approach**: The method uses mesh-based deformations to match the input image guidance, ensuring that the generated garments preserve the structure and topology of the base mesh (a minimal sketch of this deformation step follows the list).
2. **3D Supervision**: The method incorporates 3D supervision in the form of outputs from cross-domain diffusion models, providing strong guidance signals during the deformation process.
3. **Texture Estimation**: A texture estimation module generates high-fidelity UV textures that match the input image, ensuring that the generated 3D assets are visually accurate.
4. **Body-Garment Co-Optimization**: The method includes a body-garment co-optimization framework to fit the generated 3D garments to parametric body models, enabling physics-based cloth simulation and interactive experiences.
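To make the deformation step concrete, below is a minimal, illustrative sketch (not the authors' implementation): per-vertex offsets of a base mesh are optimized against a 3D target point cloud standing in for the diffusion-generated geometry, with a Chamfer term pulling the surface toward the target and a uniform Laplacian term preserving the base mesh structure. The toy tetrahedron, the `chamfer` and `uniform_laplacian` helpers, and the loss weights are all assumptions made for illustration; the image-space and diffusion-based losses of the full method are only indicated by a comment.

```python
# Minimal sketch (not the authors' code): deform a base garment mesh by
# optimizing per-vertex offsets toward a 3D target, regularized to keep
# the base mesh structure. All data and weights below are illustrative.
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between two point sets (N,3) and (M,3)."""
    d = torch.cdist(a, b)                      # pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def uniform_laplacian(verts: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    """Penalize each vertex's deviation from the mean of its neighbors."""
    i, j = edges[:, 0], edges[:, 1]
    nbr = torch.zeros_like(verts).index_add(0, i, verts[j]).index_add(0, j, verts[i])
    ones = torch.ones(len(edges))
    deg = torch.zeros(len(verts)).index_add(0, i, ones).index_add(0, j, ones)
    return ((verts - nbr / deg.clamp(min=1).unsqueeze(1)) ** 2).sum(dim=1).mean()

# Toy base mesh (a tetrahedron) and a toy point cloud standing in for the
# geometry produced by the image-to-3D diffusion model.
base_verts = torch.tensor([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
edges = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
target_pts = base_verts * 1.3 + 0.05

offsets = torch.zeros_like(base_verts, requires_grad=True)   # deformation field
opt = torch.optim.Adam([offsets], lr=1e-2)

for step in range(200):
    verts = base_verts + offsets               # deformed garment, same topology
    loss = chamfer(verts, target_pts) + 0.1 * uniform_laplacian(verts, edges)
    # (In the full method, image-space losses from rendered views and
    #  diffusion-based supervision would be added here.)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

Because only offsets over a fixed base topology are optimized, the deformed garment keeps the original vertex connectivity, which is what keeps it compatible with downstream texturing and simulation.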
The method has been evaluated on various datasets and compared with several baselines, demonstrating superior performance in terms of mesh quality, texture detail, and downstream task applicability. Garment3DGen offers a frictionless experience, allowing users to generate high-quality 3D garments without manual intervention, and is publicly available for research and development.