Gaussian Splatting in Style


6 Sep 2024 | Abhishek Saroha, Mariia Gladkova, Cecilia Curreli, Dominik Muhle, Tarun Yenamandra, and Daniel Cremers
The paper "Gaussian Splatting in Style" by Abhishek Saroha et al. presents a novel method for 3D scene stylization, which extends the concept of neural style transfer to 3D scenes. The key challenge in this task is to maintain uniformity across multiple views while stylizing a 3D scene. Unlike previous methods that require training a 3D model for each new style, the proposed method uses a neural network conditioned on a style image to generate high-quality, real-time stylized views. The underlying 3D scene representation is based on 3D Gaussian splatting (3DGS), which stores scene information in the form of 3D Gaussians. This explicit representation ensures geometric consistency and allows for fast training and rendering, making it suitable for applications like augmented or virtual reality. The method involves a 3D Color module that predicts new colors for each 3D Gaussian based on the style image, and a 2D Stylization module that uses AdaIN to guide the color prediction. During training, the scene is learned in conjunction with the 3D Color module, ensuring that the geometry and style are consistent. The method outperforms existing baselines in both short-term and long-term consistency metrics, demonstrating superior visual quality and efficiency in generating stylized views. The paper also includes ablation studies to validate the effectiveness of the proposed method, showing that joint training of the 3D Gaussians and the 3D Color module, as well as the use of pre-trained 3D Gaussians, significantly improve the results. The method is evaluated on various real-world datasets, including LLFF and Tanks and Temples (TnT), and achieves state-of-the-art performance in terms of visual quality and runtime efficiency.The paper "Gaussian Splatting in Style" by Abhishek Saroha et al. presents a novel method for 3D scene stylization, which extends the concept of neural style transfer to 3D scenes. The key challenge in this task is to maintain uniformity across multiple views while stylizing a 3D scene. Unlike previous methods that require training a 3D model for each new style, the proposed method uses a neural network conditioned on a style image to generate high-quality, real-time stylized views. The underlying 3D scene representation is based on 3D Gaussian splatting (3DGS), which stores scene information in the form of 3D Gaussians. This explicit representation ensures geometric consistency and allows for fast training and rendering, making it suitable for applications like augmented or virtual reality. The method involves a 3D Color module that predicts new colors for each 3D Gaussian based on the style image, and a 2D Stylization module that uses AdaIN to guide the color prediction. During training, the scene is learned in conjunction with the 3D Color module, ensuring that the geometry and style are consistent. The method outperforms existing baselines in both short-term and long-term consistency metrics, demonstrating superior visual quality and efficiency in generating stylized views. The paper also includes ablation studies to validate the effectiveness of the proposed method, showing that joint training of the 3D Gaussians and the 3D Color module, as well as the use of pre-trained 3D Gaussians, significantly improve the results. The method is evaluated on various real-world datasets, including LLFF and Tanks and Temples (TnT), and achieves state-of-the-art performance in terms of visual quality and runtime efficiency.