LIV-GaussMap: LiDAR-Inertial-Visual Fusion for Real-time 3D Radiance Field Map Rendering

April 15, 2024 | Sheng Hong, Junjie He, Xinhua Zheng, Chunran Zheng, Shaojie Shen
LIV-GaussMap is a system that fuses LiDAR, inertial, and visual sensors to build a precise, photorealistic 3D radiance field map. It represents the scene with differentiable Gaussians to improve mapping accuracy, rendering quality, and structural fidelity, and it exploits the complementary strengths of LiDAR and visual data to capture large-scale 3D scenes while faithfully restoring their visual detail.

The pipeline first initializes the scene's surface Gaussians and the sensor poses with a LiDAR-inertial system that uses size-adaptive voxels. Visual-derived photometric gradients are then used to optimize and refine these Gaussians, improving their quality and density (illustrative sketches of both stages are given at the end of this summary). The system supports both solid-state and mechanical LiDAR and generates photorealistic renderings in real time across diverse LiDAR-inertial-visual (LIV) datasets.

In experiments on public and self-collected datasets, both indoors and outdoors, LIV-GaussMap outperforms existing methods in PSNR and rendering speed, particularly on extrapolated viewpoints, and achieves better structural accuracy than purely visual approaches in both interpolation and extrapolation. These results make it well suited to real-time photorealistic rendering for digital twins and virtual reality, with further potential applications in real-time SLAM and robotics. The software and hardware designs, along with the self-collected datasets, are available on GitHub.
By relying on LiDAR for the initial structure and on visual data for subsequent refinement, the method remains efficient while delivering accurate mapping and real-time, high-quality rendering.
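To make the initialization step concrete, below is a minimal sketch of fitting planar surface Gaussians to a LiDAR point cloud. It is not the paper's implementation: the voxel grid here is fixed-size rather than size-adaptive, the names (`init_surface_gaussians`, `planarity_thresh`) are hypothetical, and squashing each Gaussian along its estimated plane normal only illustrates the surface-Gaussian idea.

```python
import numpy as np

def init_surface_gaussians(points, voxel_size=0.5, min_pts=10, planarity_thresh=0.05):
    """Fit one planar Gaussian per voxel of a LiDAR point cloud (illustrative only).

    Returns (means, covs): voxel centroids and covariances flattened along the
    estimated surface normal. A real system would subdivide non-planar voxels
    (size-adaptive voxels) instead of using a single fixed voxel_size.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    order = np.lexsort(keys.T)                      # group points by voxel key
    keys_sorted, pts_sorted = keys[order], points[order]
    changes = np.any(np.diff(keys_sorted, axis=0) != 0, axis=1)
    splits = np.where(changes)[0] + 1
    means, covs = [], []
    for chunk in np.split(pts_sorted, splits):
        if len(chunk) < min_pts:
            continue
        mu = chunk.mean(axis=0)
        cov = np.cov(chunk.T) + 1e-6 * np.eye(3)
        evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        if evals[0] / evals.sum() < planarity_thresh:
            # Points lie near a plane: squash the Gaussian along the normal
            # direction (eigenvector of the smallest eigenvalue).
            evals = np.array([1e-4, evals[1], evals[2]])
            cov = evecs @ np.diag(evals) @ evecs.T
        means.append(mu)
        covs.append(cov)
    return np.stack(means), np.stack(covs)
```

The refinement stage can be pictured as gradient descent on a photometric loss through a differentiable renderer. The sketch below uses PyTorch with a deliberately crude splatting function (`splat_preview`) as a stand-in for a real 3D Gaussian splatting rasterizer; it optimizes only means, colours, and isotropic scales, whereas the actual system refines full anisotropic surface Gaussians and is far faster. The `frames` format and all names are assumptions for illustration, not the paper's API.

```python
import torch

def splat_preview(means, colors, scales, K, T_cw, hw):
    """Toy differentiable renderer: project Gaussian centres with a pinhole
    camera K and blend isotropic 2D splats. Stand-in for a 3DGS rasterizer;
    only usable on small toy scenes (it builds an H*W x N weight matrix)."""
    H, W = hw
    p_cam = (T_cw[:3, :3] @ means.T).T + T_cw[:3, 3]      # world -> camera frame
    z = p_cam[:, 2:3].clamp(min=1e-3)
    uv = (K @ (p_cam / z).T).T[:, :2]                     # pixel coordinates (u, v)
    ys = torch.arange(H, dtype=means.dtype)
    xs = torch.arange(W, dtype=means.dtype)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2) as (y, x)
    d2 = ((grid.reshape(-1, 1, 2) - uv[:, [1, 0]]) ** 2).sum(-1)       # squared pixel distance
    w = torch.exp(-0.5 * d2 / scales.clamp(min=1.0) ** 2)
    w = w / (w.sum(-1, keepdim=True) + 1e-8)              # normalized blending weights
    return (w @ colors).reshape(H, W, 3)

def refine_photometric(means, colors, frames, iters=200, lr=1e-2):
    """Refine Gaussians with visual-derived photometric gradients.

    `frames` is a list of (image, K, T_cw) tensors, with poses taken from the
    LiDAR-inertial estimate. Illustrative only; not the paper's interface."""
    means = means.clone().requires_grad_(True)
    colors = colors.clone().requires_grad_(True)
    log_scales = torch.zeros(len(means), dtype=means.dtype, requires_grad=True)
    opt = torch.optim.Adam([means, colors, log_scales], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.zeros((), dtype=means.dtype)
        for image, K, T_cw in frames:
            pred = splat_preview(means, colors, log_scales.exp(),
                                 K, T_cw, image.shape[:2])
            loss = loss + (pred - image).abs().mean()     # L1 photometric loss
        loss.backward()
        opt.step()
    return means.detach(), colors.detach(), log_scales.exp().detach()
```

In practice the Gaussian centres would come from the LiDAR-inertial map (for example, by converting the NumPy outputs of the first sketch with `torch.as_tensor` and attaching per-point colours from the images), and the photometric term is usually paired with a structural-similarity term, as in standard 3D Gaussian splatting pipelines.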