12 Jun 2024 | Weirong Chen, Le Chen, Rui Wang, Marc Pollefeys
LEAP-VO is a visual odometry system that addresses the limitations of existing methods by incorporating long-term point tracking and temporal probabilistic modeling. Its core module, Long-term Effective Any Point Tracking (LEAP), combines visual, inter-track, and temporal cues with carefully selected anchors to estimate the motion of dynamic tracks. LEAP's temporal probabilistic formulation integrates distribution updates into a learnable iterative refinement module to reason about point-wise uncertainty. Built on these components, LEAP-VO robustly handles occlusions and dynamic scenes, with a front-end that uses long-term point tracking, a practice not yet common in visual odometry. By combining anchor-based dynamic track estimation with the temporal probabilistic formulation, the system captures global motion patterns and per-track uncertainties. Extensive experiments on multiple benchmarks, including Replica, MPI Sintel, and TartanAir-Shibuya, show that LEAP-VO significantly outperforms existing baselines in both accuracy and robustness, underscoring its effectiveness in dynamic, partially occluded scenes and its potential for real-world applications.
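To make the role of the per-point uncertainty concrete, the sketch below illustrates (under our own assumptions, not the authors' code) how a VO back-end could consume LEAP-style track outputs: tracks with low predicted visibility, high positional variance, or a high dynamic-object score are discarded before camera pose estimation. All function and field names here are hypothetical stand-ins.

```python
import torch

def filter_reliable_tracks(coords, visibility, variance, dynamic_prob,
                           vis_thresh=0.5, var_thresh=4.0, dyn_thresh=0.5):
    """Select tracks that are visible, confident, and likely static.

    coords:       (S, N, 2) tracked 2D positions over S frames for N tracks
    visibility:   (S, N)    per-frame visibility scores in [0, 1]
    variance:     (S, N)    predicted positional variance (pixels^2)
    dynamic_prob: (N,)      per-track probability of belonging to a moving object
    Returns a boolean mask of shape (N,) over the N tracks.
    """
    # Keep a track only if it stays visible and low-variance in every frame
    # of the sliding window, and is not flagged as dynamic.
    visible_all = (visibility > vis_thresh).all(dim=0)
    confident_all = (variance < var_thresh).all(dim=0)
    static = dynamic_prob < dyn_thresh
    return visible_all & confident_all & static


# Toy usage with random stand-in predictions (S = 8 frames, N = 256 tracks).
S, N = 8, 256
coords = torch.rand(S, N, 2) * 512
visibility = torch.rand(S, N)
variance = torch.rand(S, N) * 8
dynamic_prob = torch.rand(N)

mask = filter_reliable_tracks(coords, visibility, variance, dynamic_prob)
reliable_coords = coords[:, mask]  # passed on to pose estimation / bundle adjustment
print(f"kept {mask.sum().item()} / {N} tracks")
```

The thresholds and the per-track dynamic score are illustrative; the point is simply that uncertainty-aware track selection lets the pose solver ignore unreliable or moving points.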