3DGS-ReLoc: 3D Gaussian Splatting for Map Representation and Visual ReLocalization

17 Mar 2024 | Peng Jiang, Gaurav Pandey, Srikanth Saripalli
This paper introduces 3DGS-ReLoc, a system for 3D mapping and visual relocalization that uses 3D Gaussian Splatting as its primary map representation. The system fuses LiDAR and camera data: LiDAR points initialize the training of the 3D Gaussian Splatting map, yielding large-scale, geometrically accurate, and visually plausible environmental models, a prerequisite for advanced perception in autonomous vehicles.

To curb GPU memory consumption and support efficient spatial queries, the map is partitioned into 2D voxels indexed by a KD-tree, so that only the submaps near the current pose need to be resident in GPU memory.

Relocalization proceeds in two stages. First, a coarse estimate is obtained by matching the query image against images rendered from the Gaussian Splatting map using normalized cross-correlation (NCC). Second, the camera pose of the query image is refined with feature-based matching followed by the Perspective-n-Point (PnP) algorithm. The paper also surveys the application of 3D Gaussian Splatting in SLAM, highlighting its potential for efficient and accurate scene representation.
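The 2D-voxel-plus-KD-tree layout described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it assumes the Gaussian centers arrive as an (N, 3) array, and the class and parameter names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

class VoxelGaussianMap:
    """Partition Gaussian centers into 2D (x, y) voxels and index the
    voxel centers with a KD-tree, so only nearby submaps are fetched."""

    def __init__(self, centers, voxel_size=50.0):
        # Bucket each Gaussian by its (x, y) voxel key.
        keys = np.floor(centers[:, :2] / voxel_size).astype(int)
        self.voxel_size = voxel_size
        self.voxels = {}
        for idx, key in enumerate(map(tuple, keys)):
            self.voxels.setdefault(key, []).append(idx)
        # KD-tree over the voxel centers for fast radius queries.
        self.keys = list(self.voxels)
        voxel_centers = (np.array(self.keys) + 0.5) * voxel_size
        self.tree = cKDTree(voxel_centers)

    def query(self, position_xy, radius):
        """Return indices of all Gaussians whose voxel center lies
        within `radius` of the query position."""
        hits = self.tree.query_ball_point(position_xy, radius)
        return [i for h in hits for i in self.voxels[self.keys[h]]]
```

A radius query around the current pose then yields the subset of Gaussians to load, instead of keeping the full map in GPU memory.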
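The coarse matching step can be illustrated with a plain NCC score over grayscale images. This is a generic sketch of normalized cross-correlation, assuming equal-size images; the function names are illustrative, not from the paper.

```python
import numpy as np

def ncc(query, rendered):
    """Normalized cross-correlation between two equal-size grayscale images.
    Returns a score in [-1, 1]; 1 means a perfect (affine-brightness) match."""
    q = query - query.mean()
    r = rendered - rendered.mean()
    denom = np.sqrt((q * q).sum() * (r * r).sum())
    return float((q * r).sum() / denom) if denom > 0 else 0.0

def best_candidate(query, rendered_views):
    """Score the query image against each rendered map view and
    return the index of the best match along with all scores."""
    scores = [ncc(query, view) for view in rendered_views]
    return int(np.argmax(scores)), scores
```

Because NCC is invariant to affine brightness changes, a rendered view that differs from the query only in exposure still scores near 1.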
Compared with traditional map representations, 3D Gaussian Splatting offers better memory efficiency and visual fidelity. The system's effectiveness, versatility, and precision are demonstrated through extensive evaluation on the KITTI360 dataset. The paper concludes by discussing the trade-offs between visual quality, memory usage, and geometric fidelity in map representation, and the potential for fully differentiable localization pipelines built on 3D Gaussian Splatting.
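The final refinement stage minimizes reprojection error over the camera pose. The sketch below implements a generic PnP-style refinement with `scipy.optimize.least_squares`, not the paper's exact pipeline (which pairs feature-based matching with a standard PnP solver); all function names here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points, rvec, tvec, K):
    """Project (N, 3) world points into pixels with pose (rvec, tvec)
    and intrinsic matrix K."""
    cam = points @ rodrigues(rvec).T + tvec
    uv = cam[:, :2] / cam[:, 2:3]          # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]     # apply focal lengths and center

def refine_pose(points3d, points2d, K, rvec0, tvec0):
    """Refine an initial pose by minimizing reprojection error over
    matched 3D map points and 2D image features."""
    def residuals(x):
        return (project(points3d, x[:3], x[3:], K) - points2d).ravel()
    res = least_squares(residuals, np.hstack([rvec0, tvec0]))
    return res.x[:3], res.x[3:]
```

Given the coarse pose from the NCC stage as the initial guess, a handful of well-matched correspondences is enough for the refinement to converge.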