15 Mar 2024 | Tian-Xing Xu¹, Wenbo Hu²†, Yu-Kun Lai³, Ying Shan², and Song-Hai Zhang¹†
Texture-GS is a novel method that disentangles geometry and texture for 3D Gaussian Splatting (3D-GS), enabling flexible appearance editing. It represents the view-independent appearance as a 2D texture map, allowing real-time texture swapping and other editing operations. At its core is a texture mapping module consisting of a UV mapping MLP that learns UV coordinates for the 3D Gaussian centers, a local Taylor expansion that efficiently approximates UV coordinates at ray-Gaussian intersections, and a learnable texture that captures fine-grained appearance.
The method achieves high-fidelity texture reconstruction and real-time rendering on consumer-level devices, such as a single RTX 2080 Ti GPU. Experiments on the DTU dataset show that Texture-GS outperforms existing methods in novel view synthesis, global texture swapping, and local appearance editing, reaching an average rendering speed of 58 FPS. By approximating UV coordinates with a Taylor expansion, the method avoids querying the MLP at every intersection while keeping the texture mapping smooth. Texture-GS supports various applications, including texture painting and shadow-preserving texture swapping, and shows potential for deployment on a wide range of computing platforms. However, it struggles with objects that have thin plates or holes, due to the limited representational power of the UV mapping MLP.
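The first-order Taylor trick described above can be illustrated with a minimal numpy sketch. Here `uv_map` is a hypothetical smooth stand-in for the learned UV mapping MLP (not the paper's actual network), and the Jacobian is taken by finite differences; the point is that `uv(mu)` and `J(mu)` can be cached once per Gaussian, so each ray-Gaussian intersection only costs a small matrix-vector product:

```python
import numpy as np

def uv_map(p):
    """Hypothetical smooth 3D -> 2D UV mapping, standing in for the MLP."""
    x, y, z = p
    return np.array([0.5 * np.sin(x) + 0.5 * y,
                     0.5 * np.cos(z) + 0.25 * x])

def uv_jacobian(p, eps=1e-5):
    """Numerical 2x3 Jacobian of uv_map at p via central differences."""
    J = np.zeros((2, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (uv_map(p + dp) - uv_map(p - dp)) / (2 * eps)
    return J

def uv_taylor(mu, x):
    """First-order expansion around the Gaussian center mu:
    uv(x) ~= uv(mu) + J(mu) @ (x - mu).
    uv(mu) and J(mu) are precomputable per Gaussian, so no MLP
    query is needed per ray-Gaussian intersection."""
    return uv_map(mu) + uv_jacobian(mu) @ (x - mu)

mu = np.array([0.3, -0.2, 0.7])            # Gaussian center
x = mu + np.array([0.01, -0.02, 0.015])    # nearby intersection point
exact = uv_map(x)
approx = uv_taylor(mu, x)
err = np.max(np.abs(exact - approx))       # second-order error in |x - mu|
```

Since the approximation error shrinks quadratically with the distance to the center, it stays negligible within the effective support of each Gaussian.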