LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking

27 Mar 2019 | Heng Fan1*, Liting Lin2*, Fan Yang1*, Peng Chu1*, Ge Deng1*, Sijia Yu1*, Hexin Bai1, Yong Xu2, Chunyuan Liao3, Haibin Ling1†
LaSOT is a high-quality benchmark for large-scale single object tracking, consisting of 1,400 sequences with more than 3.5 million frames. Every frame is manually annotated with a bounding box, making LaSOT the largest densely annotated tracking benchmark to date. With an average sequence length of more than 2,500 frames, the videos cover challenges common in the wild, including target disappearance and re-appearance. LaSOT is intended both as a dedicated platform for training deep trackers and as a testbed for evaluating long-term tracking performance. Each sequence is additionally paired with a natural language specification of the target, to encourage research that integrates visual and linguistic features. Two evaluation protocols are defined: one evaluates trackers on all 1,400 sequences, while the other splits the dataset into training and testing subsets. Evaluations of 35 tracking algorithms show significant room for improvement, highlighting the need for more robust and accurate tracking methods.
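Benchmarks of this kind typically score a tracker by the intersection-over-union (IoU) between its predicted box and the ground-truth box in each frame, summarized as the area under the success curve. As a rough illustration of how such a per-sequence score could be computed (a minimal sketch; the function names and the [x, y, w, h] box format are assumptions for illustration, not LaSOT's official evaluation toolkit):

import numpy as np

def iou(box_a, box_b):
    # IoU of two axis-aligned boxes given as [x, y, w, h].
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    # Fraction of frames whose IoU exceeds each overlap threshold,
    # averaged over thresholds (area under the success curve).
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))

Averaging this score over all sequences in the chosen protocol (all 1,400 sequences, or only the testing subset) would then give a single benchmark number per tracker.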