3D Geometry-aware Deformable Gaussian Splatting for Dynamic View Synthesis


14 Apr 2024 | Zhicheng Lu¹*, Xiang Guo¹*, Le Hui¹†, Tianrui Chen¹,², Min Yang², Xiao Tang², Feng Zhu², Yuchao Dai¹†
This paper proposes a 3D geometry-aware deformable Gaussian splatting method for dynamic view synthesis. Existing neural radiance field (NeRF) based solutions learn deformation implicitly and therefore cannot incorporate 3D scene geometry, which leads to unsatisfactory dynamic view synthesis and 3D dynamic reconstruction. The proposed method instead represents the scene as a collection of 3D Gaussians, each optimized to move and rotate over time to model deformation. To enforce 3D geometry constraints during deformation, the method explicitly extracts 3D geometry features and integrates them into learning the deformation, improving both dynamic view synthesis and 3D dynamic reconstruction.

The method consists of a Gaussian canonical field and a deformation field. The Gaussian canonical field represents the static scene with 3D Gaussians together with a geometry-aware feature learning network. The deformation field estimates a transformation for each Gaussian in the canonical field, transferring that Gaussian to the queried timestamp; the transformed Gaussians are then rendered with 3D Gaussian splatting to produce the image for that timestamp.
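To make the canonical-field/deformation-field split concrete, the sketch below shows one plausible shape for the deformation network: an MLP that maps a per-Gaussian geometry-aware feature and a timestamp to a translation plus a 6D rotation offset. This is a minimal illustration under assumed inputs, not the paper's exact architecture; the class name, layer sizes, and feature dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Illustrative sketch (not the paper's code): predict a per-Gaussian
    translation and a 6D rotation offset from a geometry-aware feature
    and a scalar timestamp."""

    def __init__(self, feat_dim: int = 64, hidden: int = 256):
        super().__init__()
        # Input: per-Gaussian geometry-aware feature concatenated with time.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 6),  # 3D translation + 6D rotation offset
        )

    def forward(self, gauss_feat: torch.Tensor, t: torch.Tensor):
        # gauss_feat: (N, feat_dim); t: (N, 1) timestamp per Gaussian.
        out = self.mlp(torch.cat([gauss_feat, t], dim=-1))
        delta_xyz, rot6d = out[:, :3], out[:, 3:]
        return delta_xyz, rot6d  # applied to the canonical Gaussians
```

The deformed positions and rotations would then be handed to a standard 3D Gaussian splatting rasterizer to render the frame at time t.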
The main contributions are: (1) a geometry-aware feature extraction network built on the 3D Gaussian distribution; (2) a continuous 6D rotation representation and a modified density control strategy that adapt Gaussian splatting to dynamic scenes; and (3) extensive experiments on both synthetic and real datasets showing that the method surpasses competing methods by a wide margin, achieving state-of-the-art performance.
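The continuous 6D rotation representation referenced in contribution (2) is the construction of Zhou et al. (CVPR 2019): a network outputs two 3D vectors, which are orthonormalized via Gram-Schmidt to form a rotation matrix. Because this mapping is continuous, it avoids the discontinuities of Euler angles and the quaternion double cover, which helps gradient-based optimization of per-Gaussian rotations. Below is the standard conversion as a minimal sketch; it shows the general technique, not code from the paper.

```python
import torch
import torch.nn.functional as F

def rot6d_to_matrix(rot6d: torch.Tensor) -> torch.Tensor:
    """Convert a continuous 6D rotation representation (Zhou et al., 2019)
    to a 3x3 rotation matrix via Gram-Schmidt orthonormalization.
    rot6d: (N, 6) -> (N, 3, 3)."""
    a1, a2 = rot6d[:, :3], rot6d[:, 3:]
    b1 = F.normalize(a1, dim=-1)                                   # first column
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)                               # right-handed frame
    return torch.stack([b1, b2, b3], dim=-1)                       # columns b1, b2, b3
```

Unlike a raw quaternion output, any 6D vector (with non-degenerate halves) maps to a valid rotation, so the network's output space has no constraint surface to project onto during training.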