| Christian Kerl, Jürgen Sturm, and Daniel Cremers
This paper introduces a dense visual SLAM method for RGB-D cameras that minimizes both photometric and depth errors across all pixels, enhancing pose accuracy compared to sparse, feature-based methods. The authors propose an entropy-based similarity measure for keyframe selection and loop closure detection, which helps reduce drift. The method builds a graph from successful matches and optimizes it using the g2o framework. Extensive evaluations on publicly available datasets show that the approach performs well in low-texture and low-structure scenes, outperforming several state-of-the-art methods in terms of trajectory error. The software is released as open-source. The main contributions include a fast frame-to-frame registration method, an entropy-based keyframe selection method, a method for validating loop closures, and the integration of these techniques into a general graph SLAM solver. The paper also discusses related work and provides a detailed description of the dense visual odometry and keyframe-based visual SLAM components.
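The combined photometric and depth objective mentioned above can be sketched as follows, in notation assumed here rather than quoted from the paper ($I_1, I_2$ are intensity images, $Z_1, Z_2$ depth maps, $w$ the warp induced by the camera motion $\xi$, $\pi^{-1}$ the back-projection, and $[\cdot]_Z$ the depth component of a 3D point):

```latex
r_i(\xi) =
\begin{pmatrix}
I_2\!\left(w(x_i,\xi)\right) - I_1(x_i) \\[2pt]
Z_2\!\left(w(x_i,\xi)\right) - \left[\,T(\xi)\,\pi^{-1}\!\left(x_i, Z_1(x_i)\right)\right]_Z
\end{pmatrix},
\qquad
E(\xi) = \sum_i r_i(\xi)^{\top}\, \mathbf{W}_i\, r_i(\xi)
```

Here $\mathbf{W}_i$ is a per-pixel weight; the paper robustifies this sum with a weighting scheme (a t-distribution-based weighting), for which the exact form should be taken from the paper itself.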
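The entropy-based keyframe test can be illustrated with a minimal sketch. The idea is that the differential entropy of the Gaussian pose estimate (derived from the registration's covariance) tracks estimation uncertainty; a new keyframe is spawned when the entropy ratio between the current frame-to-keyframe match and the keyframe's first frame-to-frame match crosses a threshold. The function names, the 6x6 covariance representation, and the threshold value below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np


def pose_entropy(cov: np.ndarray) -> float:
    """Differential entropy of a Gaussian pose estimate.

    For an n-dimensional Gaussian with covariance `cov` (n = 6 for a
    rigid-body pose): H = n/2 * (1 + ln(2*pi)) + 1/2 * ln(det(cov)).
    Uses slogdet for numerical stability.
    """
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * n * (1.0 + np.log(2.0 * np.pi)) + 0.5 * logdet


def is_new_keyframe(cov_current: np.ndarray,
                    cov_first: np.ndarray,
                    threshold: float = 0.9) -> bool:
    """Entropy-ratio keyframe test (threshold value is illustrative).

    `cov_current`: covariance of the current frame-to-keyframe estimate.
    `cov_first`:   covariance of the keyframe's first frame-to-frame match.
    A new keyframe is selected when the ratio drops below the threshold.
    """
    ratio = pose_entropy(cov_current) / pose_entropy(cov_first)
    return bool(ratio < threshold)
```

The same ratio can serve double duty for loop-closure candidate validation, which is one reason the paper uses a single similarity measure for both tasks.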