BootsTAP: Bootstrapped Training for Tracking-Any-Point
23 May 2024 | Carl Doersch, Pauline Luc, Yi Yang, Dilara Gokay, Skanda Koppula, Ankush Gupta, Joseph Heyward, Ignacio Rocco, Ross Goroshin, João Carreira, and Andrew Zisserman
The paper "BootsTAP: Bootstrapped Training for Tracking-Any-Point" addresses the challenge of improving point tracking performance using large-scale, unlabeled real-world data. The authors propose a self-supervised learning approach that leverages the properties of real trajectories, such as equivariance to spatial transformations and invariance to non-spatial corruptions. By training a "teacher" model on synthetic data and using it to generate pseudo-ground-truth labels for a "student" model, the method bootstraps the student model on real-world videos. This approach significantly enhances the student model's performance on point tracking benchmarks, outperforming previous methods by a wide margin. The paper also includes ablation studies to validate the effectiveness of the proposed method and discusses the limitations and future directions. The authors release their model and checkpoints on GitHub for community use.
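The teacher-student equivariance idea can be illustrated with a minimal sketch. The teacher produces pseudo-ground-truth tracks on the raw video; the student sees a spatially transformed copy, so its predictions should equal the teacher's tracks mapped through the same transform. The function names, the simple scale-and-shift transform, and the toy data below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def affine_transform(points, scale, shift):
    # Apply a toy spatial transform (uniform scale + translation) to 2D points.
    # (The paper uses richer spatial augmentations; this stands in for them.)
    return points * scale + shift

def bootstrap_consistency_loss(teacher_tracks, student_tracks, scale, shift):
    # Self-supervised target: teacher pseudo-labels mapped through the same
    # transform the student's input video received (equivariance property).
    target = affine_transform(teacher_tracks, scale, shift)
    # Mean squared error between student predictions and transformed targets.
    return float(np.mean((student_tracks - target) ** 2))

# Toy example: teacher pseudo-labels for 3 points tracked over 4 frames,
# coordinates in a 256x256 image.
rng = np.random.default_rng(0)
teacher = rng.uniform(0.0, 256.0, size=(4, 3, 2))
scale, shift = 0.5, np.array([10.0, -5.0])

# A perfectly equivariant student would reproduce the transformed tracks,
# driving the consistency loss to zero.
student = affine_transform(teacher, scale, shift)
loss = bootstrap_consistency_loss(teacher, student, scale, shift)
```

In training, this loss would be minimized over the student's parameters on unlabeled real videos, while the teacher is periodically updated from the student (the "bootstrap" step).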