2021 | Carlos Campos*, Richard Elvira*, Juan J. Gómez Rodríguez, José M.M. Montiel and Juan D. Tardós
This paper presents ORB-SLAM3, an open-source library for visual, visual-inertial, and multi-map SLAM systems. The key contributions of ORB-SLAM3 include:
1. **Feature-Based Visual-Inertial SLAM**: A tightly-integrated visual-inertial SLAM system that relies on Maximum-a-Posteriori (MAP) estimation, even during IMU initialization. This results in robust real-time operation in various environments and significantly improved accuracy compared to previous approaches.
2. **Improved-Recall Place Recognition**: A novel place recognition method with significantly higher recall, which lets the system survive long periods of poor visual information: when tracking is lost, ORB-SLAM3 starts a new map and seamlessly merges it with previous maps once the area is revisited. All previous information can then be reused in bundle adjustment, improving accuracy even for widely separated keyframes.
3. **ORB-SLAM Atlas**: The first complete multi-map SLAM system capable of handling visual and visual-inertial systems in monocular and stereo configurations. It can represent and merge a set of disconnected maps, performing incremental multi-session SLAM.
4. **Abstract Camera Representation**: The system is agnostic to the camera model used: new models can be added by supplying their projection, unprojection, and projection-Jacobian functions. Pinhole and fisheye models are provided.
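The abstract camera representation can be illustrated with a minimal sketch. The interface and class names below (`CameraModel`, `Pinhole`) are hypothetical, not ORB-SLAM3's actual API; the sketch only shows the three functions a new model must supply: projection, unprojection, and the projection Jacobian.

```cpp
#include <array>
#include <cmath>

// Hypothetical interface for the abstract-camera idea: each model supplies
// projection, unprojection, and the Jacobian of the projection.
struct CameraModel {
    virtual ~CameraModel() = default;
    // Project a 3D point in camera coordinates to pixel coordinates.
    virtual std::array<double, 2> project(const std::array<double, 3>& p) const = 0;
    // Back-project a pixel to a unit-depth ray (z = 1).
    virtual std::array<double, 3> unproject(const std::array<double, 2>& uv) const = 0;
    // 2x3 Jacobian of the projection with respect to the 3D point.
    virtual std::array<std::array<double, 3>, 2>
    projectJac(const std::array<double, 3>& p) const = 0;
};

// Pinhole model with intrinsics fx, fy, cx, cy (distortion omitted for brevity).
struct Pinhole : CameraModel {
    double fx, fy, cx, cy;
    Pinhole(double fx_, double fy_, double cx_, double cy_)
        : fx(fx_), fy(fy_), cx(cx_), cy(cy_) {}

    std::array<double, 2> project(const std::array<double, 3>& p) const override {
        return {fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy};
    }
    std::array<double, 3> unproject(const std::array<double, 2>& uv) const override {
        return {(uv[0] - cx) / fx, (uv[1] - cy) / fy, 1.0};
    }
    std::array<std::array<double, 3>, 2>
    projectJac(const std::array<double, 3>& p) const override {
        const double iz = 1.0 / p[2], iz2 = iz * iz;
        return {{{fx * iz, 0.0, -fx * p[0] * iz2},
                 {0.0, fy * iz, -fy * p[1] * iz2}}};
    }
};
```

With this separation, tracking and bundle-adjustment code depend only on the interface, so supporting a new lens (e.g. a fisheye model) requires no changes outside the model class itself.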
The paper also discusses related work on visual SLAM, visual-inertial SLAM, and multi-map SLAM systems, and presents experimental results demonstrating the robustness and accuracy of ORB-SLAM3. Evaluated on the EuRoC drone and TUM-VI datasets, the system achieves average accuracies of 3.5 cm and 9 mm, respectively, even in challenging scenarios. The source code is publicly available to benefit the community.