18 Mar 2024 | Zhiyang Guo, Wengang Zhou, Li Li, Min Wang, and Houqiang Li
This paper proposes a motion-aware enhancement framework for dynamic scene reconstruction with 3D Gaussian Splatting (3DGS). The framework leverages optical flow to improve dynamic 3DGS by injecting motion cues into the modeling process. Its key contributions are: establishing a correspondence between 3D Gaussian movements and pixel-level flows; a flow augmentation method with an uncertainty-aware loss; and a transient-aware deformation auxiliary module for deformation-based 3DGS. By exploiting the motion information carried by optical flow, the method addresses core challenges of dynamic scene reconstruction, including motion ambiguity and model redundancy. Extensive experiments on multi-view and monocular scenes show significant improvements in rendering quality and efficiency over existing methods, and the framework is effective for both the iterative and the deformation-based paradigms of dynamic 3DGS, achieving state-of-the-art results in dynamic scene reconstruction.
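To make the uncertainty-aware flow supervision concrete, here is a minimal sketch (not the authors' implementation) of how a per-pixel optical-flow target with an uncertainty map might be used to supervise the flow rendered from 3D Gaussian motion. The function name, the L1 error, and the exponential confidence weighting are illustrative assumptions; only the idea of down-weighting unreliable flow pixels comes from the summary above.

```python
import torch

def uncertainty_weighted_flow_loss(rendered_flow: torch.Tensor,
                                   target_flow: torch.Tensor,
                                   uncertainty: torch.Tensor) -> torch.Tensor:
    """Compare the pixel-level flow splatted from 3D Gaussian motion against
    an off-the-shelf optical-flow estimate, trusting reliable pixels more.

    rendered_flow : (H, W, 2) flow rendered from the moving 3D Gaussians
    target_flow   : (H, W, 2) optical flow from a 2D estimator
    uncertainty   : (H, W)    per-pixel uncertainty (larger = less reliable)
    """
    # Per-pixel L1 flow error, summed over the (u, v) components.
    per_pixel_error = torch.abs(rendered_flow - target_flow).sum(dim=-1)  # (H, W)
    # Confidence weighting: pixels with high uncertainty (occlusions,
    # motion blur) contribute less to the supervision signal.
    weight = torch.exp(-uncertainty)
    return (weight * per_pixel_error).mean()
```

In this reading, the weighting keeps noisy 2D flow estimates from corrupting the 3D motion field, which is one plausible way to realize the "uncertainty-aware loss" mentioned in the contributions.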