14 Jul 2020 | Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, and Daniela Rus
The paper introduces LIO-SAM, a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, which aims to achieve highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry using a factor graph, allowing for the incorporation of various relative and absolute measurements, including loop closures, from different sources. The system uses IMU preintegration to de-skew point clouds and provide an initial guess for lidar odometry optimization, which is then used to estimate the IMU bias.
To improve real-time performance, the system marginalizes old lidar scans for pose optimization and employs a local sliding window approach for scan-matching, selectively adding keyframes and registering them to a fixed-size set of prior sub-keyframes. The proposed method is evaluated on datasets from three platforms across various scales and environments, demonstrating superior performance compared to existing methods like LOAM and LIOM. Key contributions include the integration of multi-sensor fusion, efficient local scan-matching, and the ability to handle long-duration navigation and feature-poor environments.
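The keyframe-and-window idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the 2-D pose representation `(x, y, yaw)`, and the threshold values are all assumptions made for clarity. The essential behavior matches the description: a scan becomes a keyframe only when motion since the last keyframe exceeds a threshold, and scan-matching targets a fixed-size set of recent sub-keyframes.

```python
from collections import deque
import math

class KeyframeWindow:
    """Illustrative sketch of LIO-SAM-style keyframe selection with a
    fixed-size local window of sub-keyframes (names and thresholds are
    hypothetical, not from the paper's code)."""

    def __init__(self, window_size=25, trans_thresh=1.0,
                 rot_thresh=math.radians(10)):
        # deque with maxlen gives the fixed-size sub-keyframe set:
        # appending beyond the limit evicts the oldest keyframe.
        self.window = deque(maxlen=window_size)
        self.trans_thresh = trans_thresh  # meters
        self.rot_thresh = rot_thresh      # radians

    def maybe_add_keyframe(self, pose, cloud):
        """Add (pose, cloud) as a keyframe only if motion since the last
        keyframe exceeds the translation or rotation threshold.
        pose is a simplified planar (x, y, yaw) tuple."""
        if self.window:
            last_pose, _ = self.window[-1]
            dt = math.dist(pose[:2], last_pose[:2])  # translation change
            dr = abs(pose[2] - last_pose[2])         # yaw change
            if dt < self.trans_thresh and dr < self.rot_thresh:
                return False  # too little motion: scan is not a keyframe
        self.window.append((pose, cloud))
        return True

    def local_map(self):
        """Union of the sub-keyframe clouds, used as the scan-matching
        target instead of the full global map."""
        return [pt for _, cloud in self.window for pt in cloud]
```

Registering each new scan against this small local map, rather than the full history, is what keeps scan-matching cost bounded regardless of trajectory length.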