Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction

18 Mar 2024 | Zhiyang Guo, Wengang Zhou, Li Li, Min Wang, Houqiang Li
The paper "Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction" addresses the challenge of dynamic scene reconstruction using 3D Gaussian Splatting (3DGS). The authors propose a novel framework that leverages optical flow to enhance different paradigms of dynamic 3DGS, including iterative and deformation-based approaches. The key contributions include: 1. **Cross-Dimensional Motion Correspondence**: Establishing a correspondence between 3D Gaussian movements and pixel-level optical flow to align scene flow with 2D optical flow. 2. **Flow Augmentation**: Introducing a new flow loss that considers uncertainty in optical flow predictions, using KL-Divergence to align projected and ground-truth flows. 3. **Dynamic Awareness**: Leveraging the flow prior to generate a dynamic map/mask, which guides the optimization towards regions with larger movements. 4. **Transient-aware Deformation Auxiliary**: For the deformation-based paradigm, a motion injector is used to explicitly inject transient information into the time-variant voxel features, enhancing the deformation process. The proposed method is evaluated on multiple datasets, including PanopticSports, Neural 3D Video, and HyperNeRF, showing significant improvements over existing methods in both multi-view and monocular scenes. The experiments demonstrate that the method effectively introduces motion information, leading to more accurate and efficient dynamic scene reconstruction. The paper also discusses the efficiency and robustness of the proposed method, highlighting its potential for future research in dynamic 3D scene reconstruction.The paper "Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction" addresses the challenge of dynamic scene reconstruction using 3D Gaussian Splatting (3DGS). The authors propose a novel framework that leverages optical flow to enhance different paradigms of dynamic 3DGS, including iterative and deformation-based approaches. The key contributions include: 1. **Cross-Dimensional Motion Correspondence**: Establishing a correspondence between 3D Gaussian movements and pixel-level optical flow to align scene flow with 2D optical flow. 2. **Flow Augmentation**: Introducing a new flow loss that considers uncertainty in optical flow predictions, using KL-Divergence to align projected and ground-truth flows. 3. **Dynamic Awareness**: Leveraging the flow prior to generate a dynamic map/mask, which guides the optimization towards regions with larger movements. 4. **Transient-aware Deformation Auxiliary**: For the deformation-based paradigm, a motion injector is used to explicitly inject transient information into the time-variant voxel features, enhancing the deformation process. The proposed method is evaluated on multiple datasets, including PanopticSports, Neural 3D Video, and HyperNeRF, showing significant improvements over existing methods in both multi-view and monocular scenes. The experiments demonstrate that the method effectively introduces motion information, leading to more accurate and efficient dynamic scene reconstruction. The paper also discusses the efficiency and robustness of the proposed method, highlighting its potential for future research in dynamic 3D scene reconstruction.