TexPainter: Generative Mesh Texturing with Multi-view Consistency


Hongkun Zhang, Zherong Pan, Congyi Zhang, Lifeng Zhu, Xifeng Gao
TexPainter is a method for generating high-quality, multi-view consistent textures for arbitrary 3D models using pre-trained diffusion models. The key challenge is ensuring that the generated textures remain consistent across different camera views. The method builds on the DDIM scheme and enforces multi-view consistency by performing joint optimization in color space rather than manipulating latent codes directly. This avoids assumptions about sequential dependencies between views and improves texture quality and consistency over existing methods. The implementation is available at https://github.com/Quantuman134/TexPainter.

The paper reviews related work in 2D/3D content generation, covering classical content-creation techniques, learnable 3D generative models, and 2D/3D diffusion models. Compared with other texture-generation techniques, TexPainter achieves better texture quality and multi-view consistency. The paper also discusses extensions, including joint optimization across multiple denoising processes and application of the method to different types of 3D models.

Evaluated on a series of 3D models using FID scores and time-consumption metrics, the method produces high-quality textures with consistent appearance across camera views while remaining efficient. The paper concludes that TexPainter is a promising approach for generating high-quality, multi-view consistent textures for arbitrary 3D models.
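To make the color-space fusion idea concrete, the sketch below (not the authors' code) shows one deterministic DDIM step applied jointly to several camera views: the predicted clean latents are decoded to images, fused into a single shared texture, re-rendered per view, and re-encoded before the denoising trajectory continues. The helpers decode, encode, bake_to_texture, and render are hypothetical stand-ins; a real system would use the latent diffusion model's VAE and a differentiable texture renderer.

```python
import torch

# --- Hypothetical stand-ins (stubs so the sketch runs) --------------------
def decode(latent):            # latent -> RGB image; real code: VAE decoder
    return latent

def encode(image):             # RGB image -> latent; real code: VAE encoder
    return image

def bake_to_texture(images):   # fuse per-view images into one shared UV texture
    return torch.stack(images).mean(dim=0)   # stub: simple average

def render(texture, view):     # re-render the shared texture from a camera view
    return texture                            # stub: return fused result


def ddim_step_multiview(latents, t, t_prev, alphas_cumprod, eps_pred):
    """One DDIM update (eta = 0) applied jointly to all camera views.

    latents:  (V, C, H, W) latent codes, one per view
    eps_pred: (V, C, H, W) noise predicted by the frozen diffusion UNet
    """
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]

    # 1. Standard DDIM estimate of the clean latents x0 for every view.
    x0 = (latents - torch.sqrt(1.0 - a_t) * eps_pred) / torch.sqrt(a_t)

    # 2. Enforce consistency in COLOR space rather than latent space:
    #    decode each view, bake all views into one shared texture,
    #    re-render that texture per view, and re-encode the renders.
    images = [decode(x0[v]) for v in range(x0.shape[0])]
    texture = bake_to_texture(images)
    x0_consistent = torch.stack(
        [encode(render(texture, v)) for v in range(x0.shape[0])]
    )

    # 3. Continue the deterministic DDIM trajectory from the fused estimate.
    return torch.sqrt(a_prev) * x0_consistent + torch.sqrt(1.0 - a_prev) * eps_pred
```

Because the fusion happens on decoded colors, all views are treated symmetrically at every step, which is what removes the sequential view-ordering assumption made by inpainting-style baselines.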