5 Jun 2024 | Tobias Fischer, Jonas Kulhanek, Samuel Rota Bulò, Lorenzo Porzi, Marc Pollefeys, Peter Kontschieder
The paper introduces 4DGF, a neural scene representation for dynamic urban areas that combines 3D Gaussians as an efficient geometry scaffold with neural fields for compact and flexible appearance modeling. This approach handles heterogeneous input data, including varying environmental conditions and dynamic objects, and significantly improves rendering speed. The method uses a scene graph to integrate scene dynamics at a global scale and models articulated motions on a local level via deformations. Experiments demonstrate that 4DGF outperforms state-of-the-art methods in terms of PSNR by over 3 dB and rendering speed by more than 200×, making it suitable for applications like mixed reality and closed-loop simulation.
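To make the described architecture concrete, below is a minimal PyTorch sketch of the two ideas the summary names: a scene graph that rigidly transforms each dynamic object's Gaussian positions into the world frame, and a small neural field that predicts appearance from position plus a condition code. This is an illustrative assumption of how such a pipeline could be wired, not the authors' implementation; the names `AppearanceField`, `compose_world_gaussians`, and the per-object `pose` callable are all hypothetical.

```python
# Hypothetical sketch of the 4DGF idea (not the authors' code):
# 3D Gaussians carry geometry, a small neural field predicts appearance,
# and a scene graph places dynamic-object Gaussians into the world frame.
import torch
import torch.nn as nn


class AppearanceField(nn.Module):
    """Assumed appearance head: maps a Gaussian's position plus a condition
    embedding (e.g., a per-sequence appearance code capturing weather or
    lighting) to an RGB color."""

    def __init__(self, cond_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz, cond):
        # Broadcast the single condition code to every Gaussian.
        cond = cond.expand(xyz.shape[0], -1)
        return self.mlp(torch.cat([xyz, cond], dim=-1))


def compose_world_gaussians(static_xyz, objects, t):
    """Scene-graph composition: rigidly transform each dynamic object's
    Gaussian centers by its pose at time t, then concatenate them with
    the static background Gaussians."""
    parts = [static_xyz]
    for obj in objects:
        R, trans = obj["pose"](t)  # assumed per-object pose function of time
        parts.append(obj["xyz"] @ R.T + trans)
    return torch.cat(parts, dim=0)


# Usage with toy data: one moving object against a static background.
field = AppearanceField()
static = torch.randn(1000, 3)
car = {
    "xyz": torch.randn(200, 3),
    "pose": lambda t: (torch.eye(3), torch.tensor([t, 0.0, 0.0])),
}
world = compose_world_gaussians(static, [car], t=2.0)
rgb = field(world, torch.zeros(16))  # per-Gaussian color, shape (1200, 3)
```

Keeping appearance in a shared neural field rather than per-Gaussian color parameters is what the summary means by "compact and flexible": one network conditioned on a small code can re-render the same geometry under different environmental conditions.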