This paper proposes a robust object tracking algorithm based on a collaborative model that combines a sparsity-based discriminative classifier (SDC) and a sparsity-based generative model (SGM). The central challenge in object tracking is handling drastic appearance changes, which the method addresses by combining holistic templates with local representations. The SDC module computes confidence values that assign higher weight to foreground regions, while the SGM module uses a histogram-based method that exploits the spatial information of local patches and handles occlusions. The update scheme incorporates both the latest observations and the original template, allowing the tracker to adapt to appearance changes while reducing drift. Experiments on a set of challenging videos show that the proposed tracker outperforms several state-of-the-art algorithms.
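The two modules above can be sketched in code. The following is a minimal, illustrative sketch, not the authors' implementation: it assumes the SDC confidence takes the common exponential form over the difference between foreground and background sparse reconstruction errors, that sparse coding is solved with a plain ISTA loop, and that the SGM similarity is a histogram intersection over sparse coefficients. All function names, the `sigma` and `lam` parameters, and the exact formulas are assumptions for illustration.

```python
import numpy as np

def sparse_code(D, y, lam=0.01, n_iter=200):
    """Solve min_a ||y - D a||^2 + lam ||a||_1 with ISTA (illustrative solver)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def sdc_confidence(y, D_fg, D_bg, sigma=1.0, lam=0.01):
    """Discriminative confidence (hypothetical form): a candidate that is
    reconstructed well by foreground templates but poorly by background
    templates receives a high score."""
    a_f = sparse_code(D_fg, y, lam)
    a_b = sparse_code(D_bg, y, lam)
    eps_f = np.sum((y - D_fg @ a_f) ** 2)  # foreground reconstruction error
    eps_b = np.sum((y - D_bg @ a_b) ** 2)  # background reconstruction error
    return np.exp(-(eps_f - eps_b) / sigma)

def sgm_similarity(hist_cand, hist_tmpl):
    """Histogram intersection between the candidate's and the template's
    (occlusion-masked) sparse-coefficient histograms."""
    return np.sum(np.minimum(hist_cand, hist_tmpl))
```

Because the two scores are multiplied into one likelihood, a candidate must look like the target locally (SGM) and stand out from the background holistically (SDC) to score well.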
The algorithm represents targets with intensity values for simplicity and efficiency, integrating a discriminative classifier built on holistic templates with a generative model built on local representations. The appearance model is updated adaptively to account for appearance variations and reduce drift. By combining the strengths of generative and discriminative approaches, the collaborative model yields a more flexible and robust likelihood function for the particle filter framework. Evaluated on ten challenging image sequences, the tracker performs favorably in terms of center location error and overlap rate compared with other state-of-the-art methods, and it remains robust to occlusions, motion blur, rotation, illumination changes, and complex backgrounds, making it a promising solution for real-world tracking applications.
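The role of the collaborative likelihood inside the particle filter can be sketched as follows. This is a generic bootstrap particle filter, assumed here for illustration rather than taken from the paper: a random-walk motion model, importance weights given by a caller-supplied `likelihood` function (which would be the product of the SDC confidence and the SGM similarity), and multinomial resampling. The state is simplified to a 2-D position; the real tracker would also carry scale and other affine parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, std=4.0):
    """Random-walk motion model on (x, y) particle states (illustrative)."""
    return particles + rng.normal(0.0, std, particles.shape)

def resample(particles, weights):
    """Multinomial resampling proportional to the normalized weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def track_step(particles, likelihood):
    """One filter iteration: predict, weight each candidate by the
    collaborative likelihood, estimate the state, then resample."""
    particles = propagate(particles)
    w = np.array([likelihood(p) for p in particles])
    w = w / w.sum()
    estimate = (w[:, None] * particles).sum(axis=0)   # weighted-mean estimate
    return resample(particles, w), estimate
```

Since both modules only enter through the `likelihood` callable, this structure makes it easy to see why the collaborative score works as a drop-in likelihood: particles near image regions that score well under both the discriminative and generative models survive resampling.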