14 Jun 2024 | Jiacong Xu, Yiqun Mei, Vishal M. Patel
Wild-GS is a novel method for real-time novel view synthesis from unconstrained photo collections. It improves the robustness of 3D Gaussian Splatting (3DGS) without sacrificing its efficiency. Wild-GS decomposes the appearance of each 3D Gaussian into global and local components, incorporating material properties, lighting, and camera settings. It aligns pixel appearance features with corresponding local Gaussians using triplane representations, enabling accurate appearance transfer to 3D space. The method also uses 2D visibility maps and depth regularization to handle transient objects and ensure geometric consistency. Extensive experiments show that Wild-GS achieves state-of-the-art rendering performance and the highest efficiency in both training and inference. It also supports appearance transfer from arbitrary images, demonstrating its versatility in handling real-world scenarios with varying appearances and transient occlusions.
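To make the triplane alignment concrete, the sketch below illustrates one way per-Gaussian local appearance features could be gathered: each Gaussian center is projected onto three axis-aligned feature planes and the features are bilinearly interpolated and aggregated. This is a minimal sketch under assumed choices (plane resolution, channel count, sum aggregation, and centers pre-normalized to [-1, 1]^3), not the paper's exact configuration.

```python
# Minimal sketch of triplane feature sampling for per-Gaussian local appearance.
# Assumptions (not from the paper): centers normalized to [-1, 1]^3, 32-channel
# planes of resolution 128, and sum aggregation over the three projections.
import torch
import torch.nn.functional as F


def sample_triplane(points, plane_xy, plane_xz, plane_yz):
    """Gather local appearance features for 3D Gaussian centers.

    points:   (N, 3) Gaussian centers normalized to [-1, 1].
    plane_*:  (1, C, H, W) learnable feature planes.
    returns:  (N, C) per-Gaussian feature vectors.
    """
    # Project each center onto the three axis-aligned planes: grids of shape (1, 1, N, 2).
    xy = points[:, [0, 1]].view(1, 1, -1, 2)
    xz = points[:, [0, 2]].view(1, 1, -1, 2)
    yz = points[:, [1, 2]].view(1, 1, -1, 2)

    def lookup(plane, grid):
        # Bilinear interpolation on the plane; output (1, C, 1, N) -> (N, C).
        feat = F.grid_sample(plane, grid, mode="bilinear", align_corners=True)
        return feat.squeeze(0).squeeze(1).transpose(0, 1)

    # Sum the three projections so the channel count stays fixed.
    return lookup(plane_xy, xy) + lookup(plane_xz, xz) + lookup(plane_yz, yz)


if __name__ == "__main__":
    num_gaussians, channels, res = 4096, 32, 128
    centers = torch.rand(num_gaussians, 3) * 2 - 1              # in [-1, 1]^3
    planes = [torch.randn(1, channels, res, res) for _ in range(3)]
    local_feat = sample_triplane(centers, *planes)
    print(local_feat.shape)                                      # torch.Size([4096, 32])
```

In this reading, the sampled local features would be combined with a global, image-level appearance code to predict each Gaussian's view-dependent color; the exact fusion network is part of the paper's design and is not reproduced here.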