Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting


26 Jul 2024 | Jeongmin Bae, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, and Youngjung Uh
This paper proposes a per-Gaussian deformation method for deformable 3D Gaussian Splatting (3DGS) that improves the reconstruction of dynamic scenes. The method defines the deformation of each Gaussian as a function of a learned per-Gaussian embedding and a temporal embedding, and decomposes the deformation into coarse and fine components to model slow and fast movements, respectively. Additionally, a local smoothness regularization encourages neighboring Gaussians to deform similarly. The method outperforms existing approaches at capturing fine details in dynamic regions while achieving fast rendering speed and relatively low model capacity.

Experiments show that the proposed method achieves superior reconstruction quality, higher FPS, and a smaller model size than the baselines. It is evaluated on multiple datasets, including the Neural 3D Video, Technicolor Light Field, and HyperNeRF datasets, demonstrating its effectiveness at capturing dynamic scenes. The method is also tested under challenging camera settings and remains robust when reconstructing fast-moving parts. The results indicate that it captures dynamic scenes with both high quality and efficiency.
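The core idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all parameter names, dimensions, and the choice of sinusoidal temporal embeddings are illustrative assumptions, and a tiny MLP stands in for the deformation networks. The key structure is that each Gaussian's offset is a function of its own embedding plus a time embedding, with a low-frequency (coarse) branch for slow motion and a high-frequency (fine) branch for fast motion.

```python
import numpy as np

rng = np.random.default_rng(0)
N, E, H = 4, 8, 16  # number of Gaussians, embedding dim, hidden width (illustrative)

# Hypothetical learnable parameters, randomly initialized here for illustration.
gaussian_embed = rng.normal(size=(N, E))  # one embedding per Gaussian
W1c, W2c = rng.normal(size=(E + 4, H)) * 0.1, rng.normal(size=(H, 3)) * 0.1  # coarse branch
W1f, W2f = rng.normal(size=(E + 4, H)) * 0.1, rng.normal(size=(H, 3)) * 0.1  # fine branch

def temporal_embed(t, freqs):
    """Sinusoidal temporal embedding; low frequencies capture slow (coarse) motion."""
    return np.array([fn(2 * np.pi * f * t) for f in freqs for fn in (np.sin, np.cos)])

def deform(z, t_emb, W1, W2):
    """Tiny MLP mapping (per-Gaussian embedding, time embedding) -> position offset."""
    x = np.concatenate([z, np.tile(t_emb, (z.shape[0], 1))], axis=1)
    return np.tanh(x @ W1) @ W2

t = 0.5
coarse = deform(gaussian_embed, temporal_embed(t, [0.5, 1.0]), W1c, W2c)  # slow movements
fine = deform(gaussian_embed, temporal_embed(t, [4.0, 8.0]), W1f, W2f)    # fast movements
delta_xyz = coarse + fine  # total per-Gaussian position deformation at time t
```

In the actual method the deformation would also cover rotation and scale, and all parameters would be trained jointly with the 3DGS reconstruction loss; the sketch only shows the position offset for brevity.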
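The local smoothness regularization can likewise be sketched under simple assumptions: penalize differences between each Gaussian's deformation and those of its k nearest neighbors in canonical space, weighted by a distance kernel. The function name, the k-NN neighborhood, and the Gaussian kernel weighting are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def local_smoothness_loss(positions, deformations, k=3, sigma=1.0):
    """Penalize deformation differences between each Gaussian and its k nearest
    neighbors, weighted by a Gaussian kernel on canonical-space distance."""
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)         # exclude each point from its own neighborhood
    knn = np.argsort(dist, axis=1)[:, :k]  # indices of the k nearest neighbors
    loss = 0.0
    for i in range(len(positions)):
        for j in knn[i]:
            w = np.exp(-dist[i, j] ** 2 / (2 * sigma ** 2))  # closer pairs weigh more
            loss += w * np.sum((deformations[i] - deformations[j]) ** 2)
    return loss / (len(positions) * k)

# Usage: identical deformations incur zero penalty; divergent ones are penalized.
rng = np.random.default_rng(1)
pos = rng.normal(size=(6, 3))
zero_loss = local_smoothness_loss(pos, np.ones((6, 3)))
pos_loss = local_smoothness_loss(pos, rng.normal(size=(6, 3)))
```

This kind of term regularizes the deformation field so that nearby Gaussians move coherently, which is what suppresses flickering artifacts in dynamic regions.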