Direct Sparse Odometry

7 Oct 2016 | Jakob Engel, Vladlen Koltun, Daniel Cremers
This paper proposes Direct Sparse Odometry (DSO), a visual odometry method that combines a fully direct probabilistic model, minimizing a photometric error, with consistent joint optimization of all model parameters, including camera motion and geometry. Unlike other direct methods, DSO omits the geometric smoothness prior and instead samples points evenly across the image. Because it does not rely on keypoint detectors or descriptors, it can use pixels from all image regions that carry intensity gradient, including edges and regions of smooth intensity variation such as mostly white walls.

Optimization runs continuously over a sliding window of recent frames, taking into account a photometrically calibrated model of image formation. All involved parameters, i.e. camera intrinsics, camera extrinsics, and inverse depth values, are optimized jointly, which is effectively the photometric equivalent of windowed sparse bundle adjustment. Geometry is represented sparsely: each 3D point is parameterized by its inverse depth in a reference frame. Old camera poses and points that leave the field of view are marginalized out of the window. A minimal sketch of the per-point photometric residual is given below.
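To make the error model concrete, here is an illustrative sketch of a single-pixel photometric residual with inverse-depth reprojection and an affine brightness/exposure correction, in the spirit of the formulation summarized above. The function and variable names (project, photometric_residual, T_ji, the affine parameters a and b) are assumptions of this write-up; the actual system evaluates a small pixel neighbourhood per point with gradient-dependent weights and a Huber norm, and refines all parameters jointly with Gauss-Newton, none of which is shown here.

```python
import numpy as np

def project(K, p_xy, inv_depth, T_ji):
    """Reproject a host-frame pixel into the target frame.

    K         : 3x3 camera intrinsics
    p_xy      : (u, v) pixel coordinates in the host frame i
    inv_depth : inverse depth of the point in the host frame
    T_ji      : 4x4 rigid-body transform from host frame i to target frame j
    """
    # Back-project the pixel using the inverse-depth parameterization.
    ray = np.linalg.inv(K) @ np.array([p_xy[0], p_xy[1], 1.0])
    X_i = ray / inv_depth                       # 3D point in the host frame
    X_j = T_ji[:3, :3] @ X_i + T_ji[:3, 3]      # point in the target frame
    uvw = K @ X_j
    return uvw[:2] / uvw[2]                     # projected pixel in the target frame

def photometric_residual(I_i, I_j, p_xy, inv_depth, T_ji, K,
                         t_i, t_j, a_i, b_i, a_j, b_j):
    """Photometric residual for one pixel with affine brightness transfer:

        r = (I_j[p'] - b_j) - (t_j * exp(a_j)) / (t_i * exp(a_i)) * (I_i[p] - b_i)

    t_i, t_j are exposure times; a, b are per-frame affine brightness parameters.
    """
    u, v = project(K, p_xy, inv_depth, T_ji)
    # Nearest-neighbour lookup for brevity; a real system would interpolate.
    I_j_val = float(I_j[int(round(v)), int(round(u))])
    I_i_val = float(I_i[int(p_xy[1]), int(p_xy[0])])
    scale = (t_j * np.exp(a_j)) / (t_i * np.exp(a_i))
    return (I_j_val - b_j) - scale * (I_i_val - b_i)
```

Summing such residuals over all active points and frames in the window yields the energy that the sliding-window optimization minimizes.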
DSO integrates a full photometric calibration into the model, accounting for the camera's non-linear response function, lens vignetting (attenuation), and exposure time. Computing residuals on photometrically corrected images, using gamma correction, the vignette map, and known exposure times, increases both accuracy and robustness; a minimal sketch of this correction is given at the end of this summary.

The CPU-based implementation runs in real time on a laptop and, with reduced settings, at 5× real-time speed while still outperforming state-of-the-art indirect methods. On high settings it produces semi-dense models similar in density to those of LSD-SLAM but considerably more accurate. Evaluated on three datasets, DSO shows significant improvements over state-of-the-art direct and indirect methods in both tracking accuracy and robustness. The evaluation also analyzes the effect of important parameters and of new components such as the photometric calibration, showing that the method is effective across a variety of real-world settings.
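The photometric calibration can be summarized by the image-formation model the paper uses, I(x) = G(t · V(x) · B(x)): the recorded pixel value is the non-linear response G applied to exposure time t times vignetting V times scene irradiance B. The sketch below inverts that model before residuals are computed; the function name and argument conventions are assumptions of this write-up, and when exposure times are unknown the per-frame affine brightness parameters absorb the remaining error.

```python
import numpy as np

def photometric_correction(image, inv_response, vignette, exposure_time):
    """Recover values proportional to scene irradiance from a raw image.

    Assumes the image-formation model I(x) = G(t * V(x) * B(x)), so that
        B(x) = G^{-1}(I(x)) / (t * V(x)).

    image         : HxW array of 8-bit pixel values
    inv_response  : length-256 numpy lookup table for the inverse response G^{-1}
    vignette      : HxW attenuation map V(x) with values in (0, 1]
    exposure_time : exposure time t of this frame
    """
    linear = inv_response[image.astype(np.uint8)]   # undo the non-linear response G
    return linear / (vignette * exposure_time)      # undo vignetting and exposure
```

All images would be passed through such a correction before the photometric residuals above are evaluated.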