4D-Rotor Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes

SIGGRAPH 2024 (July 27–August 1, 2024) | Yuanxing Duan, Fangyin Wei, Qiyu Dai, Yuhang He, Wenzheng Chen, Baoquan Chen
This paper introduces 4D-Rotor Gaussian Splatting (4DRotorGS), a method for efficient novel view synthesis (NVS) in dynamic scenes. It extends 3D Gaussian Splatting (3DGS) to 4D by representing a dynamic scene with anisotropic XYZT Gaussians, which capture complex motion and fine spatial detail. The 4D rotations are parameterized with rotors, giving a spatially-temporally separable representation; as a result, 4DRotorGS is a strict generalization of 3DGS that models both static and dynamic scenes within one framework.

Training is stabilized by two new regularization terms, an entropy loss on per-Gaussian opacities and a 4D consistency loss on Gaussian motion, both of which improve the quality of the dynamic reconstruction. The method is implemented in a highly optimized CUDA framework, enabling fast training and real-time rendering at up to 277 FPS on an RTX 3090 GPU and 583 FPS on an RTX 4090 GPU.

Extensive experiments on two benchmarks, the synthetic monocular D-NeRF dataset and the real multi-camera Plenoptic Video dataset, show that 4DRotorGS outperforms existing methods in both PSNR and rendering speed. The paper also surveys related work on optimization-based NVS, dynamic NVS, and Gaussian-based NVS methods, and concludes that 4DRotorGS is a promising approach for efficient novel view synthesis of dynamic scenes.
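The core geometric operation behind rendering is slicing a 4D XYZT Gaussian at a query time t to obtain a 3D Gaussian that can be splatted as in 3DGS. Below is a minimal NumPy sketch of that slice, assuming the 4×4 covariance has already been assembled from the rotor-derived rotation and the four per-axis scales (the rotor-to-matrix conversion itself is omitted here). The function name is hypothetical, and the math follows the standard conditional/marginal Gaussian formulas rather than the paper's exact CUDA code.

```python
import numpy as np

def slice_4d_gaussian(mu, cov, t):
    """Condition an XYZT Gaussian on time t, yielding a 3D Gaussian plus
    a scalar opacity falloff from the temporal marginal.

    mu  : (4,)  mean in (x, y, z, t)
    cov : (4,4) covariance, e.g. R @ S @ S.T @ R.T with a rotor-derived R
    t   : query time
    """
    mu_xyz, mu_t = mu[:3], mu[3]
    cov_xyz = cov[:3, :3]          # spatial block
    cov_xt = cov[:3, 3]            # space-time cross covariance
    var_t = cov[3, 3]              # temporal variance

    # Conditional (sliced) 3D Gaussian at time t
    gain = cov_xt / var_t
    mu_cond = mu_xyz + gain * (t - mu_t)            # mean drifts over time
    cov_cond = cov_xyz - np.outer(gain, cov_xt)     # Schur complement

    # Marginal temporal density modulates the Gaussian's opacity
    opacity_scale = np.exp(-0.5 * (t - mu_t) ** 2 / var_t)
    return mu_cond, cov_cond, opacity_scale
```

Note that the conditional mean drifts linearly in t through the space-time cross covariance, so a single 4D Gaussian encodes a locally linear motion; this is precisely the coupling that the rotor-parameterized space-time rotation introduces.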
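The two regularizers can be sketched just as compactly. The PyTorch snippet below is a hypothetical rendition, assuming the entropy term is the standard −o·log o penalty that pushes opacities toward binary values (so near-transparent Gaussians can be pruned) and the 4D consistency term penalizes each Gaussian's motion for deviating from the mean motion of its k nearest neighbors in XYZT; all names, the neighbor count, and the brute-force distance computation are illustrative, not the paper's implementation.

```python
import torch

def entropy_loss(opacity):
    """Push per-Gaussian opacities toward 0 or 1 so that low-opacity
    'floaters' can be pruned. opacity: (N,) values in (0, 1)."""
    eps = 1e-6
    o = opacity.clamp(eps, 1.0 - eps)
    return (-o * torch.log(o)).mean()

def consistency_4d_loss(positions, velocities, k=8):
    """Encourage nearby Gaussians (in 4D) to move consistently.

    positions  : (N, 4) XYZT means
    velocities : (N, 3) per-Gaussian motion, e.g. cov[:3, 3] / cov[3, 3]
    """
    # Brute-force pairwise distances; a KD-tree would replace this at scale
    dists = torch.cdist(positions, positions)               # (N, N)
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]   # drop self
    neighbor_mean = velocities[knn].mean(dim=1)             # (N, 3)
    return (velocities - neighbor_mean).norm(dim=-1).mean()
```

In training these would be added to the photometric objective with small weights, e.g. L = L_rgb + λ1·entropy_loss(o) + λ2·consistency_4d_loss(μ, v); the weights here are placeholders rather than the paper's reported values.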