InteX: Interactive Text-to-Texture Synthesis via Unified Depth-aware Inpainting


18 Mar 2024 | Jiaxiang Tang, Ruijie Lu, Xiaokang Chen, Xiang Wen, Gang Zeng, and Ziwei Liu
**Authors:** Jiaxiang Tang, Ruijie Lu, Xiaokang Chen, Xiang Wen, Gang Zeng, Ziwei Liu

**Institutional Affiliations:** National Key Lab of General AI, Peking University; Zhejiang University; Skywork AI; S-Lab, Nanyang Technological University

**Abstract:** Text-to-texture synthesis has emerged as a significant frontier in 3D content creation, driven by advances in text-to-image models. However, existing methods often suffer from 3D inconsistencies and limited controllability. To address these challenges, the authors introduce InteX, an interactive text-to-texture synthesis framework. InteX features a user-friendly interface that enables flexible visualization, inpainting, erasing, and repainting. In addition, a unified depth-aware inpainting model integrates depth information with inpainting cues, improving both 3D consistency and generation speed. Extensive experiments demonstrate that InteX generates high-quality textures efficiently while supporting smooth user interaction.

**Contributions:**

1. **User-Friendly Interface:** InteX includes a graphical interface for interactive texture synthesis, allowing users to visualize and control the synthesis process.
2. **Unified Depth-Aware Inpainting Model:** This model integrates depth information with inpainting cues, reducing 3D inconsistencies and improving generation speed.
3. **Efficiency and Flexibility:** The framework reduces texture generation time to approximately 30 seconds per instance while enhancing controllability and flexibility.

**Keywords:** 3D Generation, Texture Synthesis

**Introduction:** The paper reviews the challenges and recent advances in text-to-texture synthesis, highlighting two key limitations of existing methods: 3D inconsistency and limited controllability. InteX addresses both through its user-friendly interface and its unified depth-aware inpainting model.
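The core idea of a *unified* depth-aware inpainting condition can be illustrated with a minimal sketch: the known-region mask, the masked render, and the rendered depth map are stacked into a single conditioning tensor fed to the generative model, rather than being handled by two separate control branches. The shapes, channel layout, and function name below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def build_condition(image, mask, depth):
    """Stack inpainting cues (masked image + mask) with a depth map into
    one conditioning tensor, sketching a unified depth-aware inpainting input.

    image: (H, W, 3) float32 in [0, 1], the current render
    mask:  (H, W)    bool, True where the texture is already known
    depth: (H, W)    float32, rendered depth normalized to [0, 1]
    """
    masked_image = image * mask[..., None]           # hide still-unknown regions
    cond = np.concatenate(
        [masked_image,                               # 3 channels: known colors
         mask[..., None].astype(np.float32),         # 1 channel: inpaint mask
         depth[..., None]],                          # 1 channel: geometry cue
        axis=-1,
    )
    return cond                                      # (H, W, 5)

# Toy example: top two rows are "known", bottom two must be inpainted.
h, w = 4, 4
image = np.random.rand(h, w, 3).astype(np.float32)
mask = np.zeros((h, w), dtype=bool)
mask[:2] = True
depth = np.linspace(0, 1, h * w, dtype=np.float32).reshape(h, w)
cond = build_condition(image, mask, depth)
print(cond.shape)  # (4, 4, 5)
```

Feeding depth and inpainting cues jointly, rather than through separate conditioning paths, is what lets a single model keep the filled regions geometrically consistent with the mesh.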
**Methodology:**

- **Unified Depth-Aware Inpainting Prior Model:** Trained on 3D datasets, this model integrates depth information with inpainting cues to enhance 3D consistency.
- **Iterative Texture Synthesis:** Textures are synthesized on 3D surfaces through iterative inpainting from multiple viewpoints, eliminating the need for test-time optimization or multi-stage refinement.
- **GUI for Practical Use:** A graphical user interface lets users select camera viewpoints, erase and repaint specific regions, and change the text prompt during synthesis.

**Experiments:**

- **Implementation Details:** The training and inference procedures are described, including dataset filtering, model architecture, and hyperparameters.
- **Effectiveness of Depth-Aware Inpainting:** Experiments show that the depth-aware inpainting model produces better-aligned and more consistent results than baseline methods.
- **Qualitative and Quantitative Comparisons:** Compared with recent state-of-the-art techniques, the method demonstrates superior performance.
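The iterative synthesis loop described in the methodology can be sketched schematically: for each camera viewpoint, render the current texture plus a mask of still-untextured pixels, inpaint only those pixels, and back-project the result into UV space. The renderer, inpainter, and back-projection below are hypothetical stand-ins (a toy 1-D "texture"), not the paper's renderer or diffusion model.

```python
import numpy as np

def synthesize_texture(texture, views, render, inpaint, backproject):
    """Iterative inpainting-based texture synthesis (schematic).

    For each viewpoint: render the current texture and a known-pixel mask,
    inpaint the unknown pixels with the depth-aware model, and back-project
    the completed image into texture space. All callables are placeholders.
    """
    for cam in views:
        image, known_mask, depth = render(texture, cam)
        if known_mask.all():              # nothing left to fill from this view
            continue
        image = inpaint(image, known_mask, depth)       # fill unknown pixels
        texture = backproject(texture, image, known_mask, cam)
    return texture

# Toy instantiation on a 1-D "texture": each view sees one overlapping slice.
tex = np.full(8, np.nan)                  # NaN marks untextured texels

def render(t, cam):
    patch = t[cam].copy()
    known = ~np.isnan(patch)
    depth = np.zeros_like(patch)          # flat depth placeholder
    return patch, known, depth

def inpaint(img, known, depth):
    img = img.copy()
    img[~known] = 1.0                     # stand-in for the diffusion inpainter
    return img

def backproject(t, img, known, cam):
    t = t.copy()
    t[cam] = np.where(known, t[cam], img)  # only write previously unknown texels
    return t

views = [slice(0, 5), slice(3, 8)]
tex = synthesize_texture(tex, views, render, inpaint, backproject)
print(tex)  # all texels filled
```

Because each pass inpaints only the pixels no earlier view has covered, later views stay consistent with what was already painted, which is why no separate optimization or refinement stage is needed.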
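The GUI's erase-and-repaint interaction reduces to invalidating a user-selected UV region and re-running the same synthesis loop over it. A hypothetical sketch, where untextured texels are marked with NaN as an assumption of this illustration:

```python
import numpy as np

def erase_region(texture, uv_mask):
    """Mark a user-selected UV region as unknown so the next synthesis
    pass repaints it. `uv_mask` is True where the user's brush erased."""
    out = texture.copy()
    out[uv_mask] = np.nan                 # NaN = "needs inpainting"
    return out

# Toy example: erase a 2x2 patch in the middle of a fully painted texture.
tex = np.ones((4, 4))
brush = np.zeros((4, 4), dtype=bool)
brush[1:3, 1:3] = True
tex = erase_region(tex, brush)
print(int(np.isnan(tex).sum()))  # 4
```

Because repainting reuses the ordinary inpainting path, a different text prompt can be supplied for just the erased region, which is what makes the interaction in the GUI flexible.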