14 May 2024 | FENG-LIN LIU, HONGBO FU, YU-KUN LAI, LIN GAO
SketchDream is a novel method for generating and editing 3D content from 2D sketches and text prompts. The system addresses the challenges of 2D-to-3D translation ambiguity and multi-modal condition integration, enabling high-quality 3D content generation and detailed local editing. Key contributions include:
1. **Sketch-based Multi-View Image Generation**: Utilizes a diffusion model to predict a depth map from the input sketch, which is then used to warp the sketch content to novel viewpoints. A 3D ControlNet enforces spatial correspondence with the sketch and 3D consistency across the generated views (a minimal warping sketch follows this list).
2. **3D Content Generation**: Combines the depth-guided warping strategy with NeRF (Neural Radiance Fields) optimization to generate realistic 3D models. The method uses Score Distillation Sampling (SDS) to optimize the 3D representation, distilling the diffusion model's guidance into high-quality geometry and appearance (see the SDS sketch after this list).
3. **Sketch-based 3D Editing**: Supports local editing of generated or reconstructed 3D models via a two-stage framework: a coarse stage produces initial editing results, and a fine stage refines the details while preserving unedited regions (a masked-loss sketch follows this list).
4. **Evaluation**: Extensive experiments demonstrate that SketchDream outperforms existing methods in terms of quality, detail control, and user satisfaction. The method generates high-quality 3D content with accurate geometry and realistic appearance, and supports detailed local editing with natural interactions between components.
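Concretely, the depth-guided warping in (1) can be pictured as unprojecting sketch-stroke pixels with the predicted depth and reprojecting them into a target view. Below is a minimal sketch assuming a pinhole camera model with intrinsics shared across views; the function name and signature are illustrative, not taken from the authors' code.

```python
import numpy as np

def warp_sketch_to_view(sketch, depth, K, T_src_to_tgt):
    """Warp a binary sketch image to a target viewpoint using per-pixel depth.

    sketch       : (H, W) binary array of sketch strokes.
    depth        : (H, W) depth map predicted for the sketch view.
    K            : (3, 3) pinhole intrinsics (assumed shared by both views).
    T_src_to_tgt : (4, 4) rigid transform from source to target camera frame.
    """
    H, W = sketch.shape
    ys, xs = np.nonzero(sketch)                      # stroke pixel coordinates
    z = depth[ys, xs]

    # Unproject stroke pixels into 3D points in the source camera frame.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).astype(np.float64)
    pts_src = np.linalg.inv(K) @ pix * z             # (3, N)

    # Move the points into the target camera frame.
    pts_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    pts_tgt = (T_src_to_tgt @ pts_h)[:3]

    # Reproject into the target image plane, keeping points in front of the camera.
    proj = K @ pts_tgt
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)

    warped = np.zeros_like(sketch)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (pts_tgt[2] > 0)
    warped[v[ok], u[ok]] = 1
    return warped
```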
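The SDS objective in (2) follows the standard Score Distillation Sampling recipe: the NeRF render is noised, a frozen diffusion model predicts the noise, and $w(t)\,(\epsilon_\phi(x_t; y, t) - \epsilon)$ is used as the gradient on the render. A minimal PyTorch-style sketch, where `unet` and its call signature stand in for an assumed sketch-conditioned denoiser, not the paper's exact model:

```python
import torch

def sds_loss(unet, scheduler, rendered, text_emb, sketch_cond):
    """One Score Distillation Sampling step on images rendered from the NeRF.

    rendered : (B, C, H, W) renders, carrying gradients back to the NeRF.
    unet     : frozen denoiser; its call signature here is an assumption.
    scheduler: exposes `alphas_cumprod`, a (T,) tensor of cumulative alphas.
    """
    B = rendered.shape[0]
    T = scheduler.alphas_cumprod.shape[0]
    t = torch.randint(20, T - 20, (B,), device=rendered.device)  # skip extreme timesteps
    noise = torch.randn_like(rendered)

    # Forward diffusion: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    a_bar = scheduler.alphas_cumprod.to(rendered.device)[t].view(B, 1, 1, 1)
    x_t = a_bar.sqrt() * rendered + (1.0 - a_bar).sqrt() * noise

    # The frozen diffusion model predicts the injected noise; no grad through it.
    with torch.no_grad():
        eps_pred = unet(x_t, t, text_emb, sketch_cond)

    # SDS gradient w(t) * (eps_pred - eps), with the common choice w(t) = 1 - a_bar_t.
    grad = (1.0 - a_bar) * (eps_pred - noise)

    # A surrogate loss whose gradient w.r.t. `rendered` is exactly `grad`.
    return (grad.detach() * rendered).sum() / B
```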
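The fine stage in (3), refining details while preserving unedited regions, is naturally expressed as a masked objective: generative guidance inside the edit mask, reconstruction of the original model outside it. The sketch below illustrates that idea under stated assumptions; the loss weighting and mask handling are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def editing_losses(rendered, original_render, edit_mask, sds_grad,
                   lambda_preserve=10.0):
    """Fine-stage editing objective: generative guidance inside the edited
    region, reconstruction of the original model outside it.

    rendered        : (B, C, H, W) render of the NeRF being edited.
    original_render : (B, C, H, W) render of the unedited model.
    edit_mask       : (B, 1, H, W), 1 inside the sketched edit region.
    sds_grad        : (B, C, H, W) precomputed SDS gradient for `rendered`.
    lambda_preserve : assumed weight balancing the two terms.
    """
    # SDS-style guidance only where the user's sketch asks for changes.
    loss_edit = ((sds_grad * edit_mask).detach() * rendered).sum() / rendered.shape[0]

    # Pin everything outside the mask to the original appearance.
    loss_preserve = F.mse_loss(rendered * (1 - edit_mask),
                               original_render.detach() * (1 - edit_mask))

    return loss_edit + lambda_preserve * loss_preserve
```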
The paper also includes a comprehensive review of related work and a detailed methodology section, providing a thorough account of the background and technical details of SketchDream.