This paper proposes an unsupervised framework for person re-identification (ReID) based on salience learning. The key contributions include: (1) an unsupervised framework for extracting distinctive features without requiring identity labels; (2) adjacency-constrained patch matching to handle viewpoint and pose variations; and (3) unsupervised learning of human salience for discriminative and reliable patch matching. The approach is validated on the VIPeR and ETHZ datasets.
Person ReID involves matching and ranking pedestrians across non-overlapping camera views. Existing methods often require labeled data and struggle with viewpoint, pose, and illumination changes. This paper introduces an unsupervised approach that learns human salience without identity labels, improving performance by incorporating salient features in patch matching.
The method uses dense correspondence to align images and patch matching with adjacency constraints to handle misalignment. Human salience is learned in an unsupervised manner, focusing on distinctive features that are discriminative and reliable across different views. The approach is evaluated on the VIPeR and ETHZ datasets, showing significant improvements in matching accuracy compared to existing methods.
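The adjacency constraint described above can be sketched as follows: for a patch in the probe image, candidate matches in the gallery image are restricted to patches whose row index lies within a small vertical band around the probe patch's row. This is a minimal illustration of the idea, not the paper's implementation; the function name, the band width `l`, and the use of Euclidean feature distance are assumptions for the sketch.

```python
import numpy as np

def adjacency_constrained_match(query_patch, query_row, gallery_feats, gallery_rows, l=2):
    """Return the smallest feature distance between the query patch and
    gallery patches lying within l rows of the query patch's row.

    query_patch:  (d,) feature vector of the probe patch (illustrative).
    query_row:    row index of the probe patch in its image grid.
    gallery_feats: (M, d) array of gallery patch features.
    gallery_rows:  (M,) array of the gallery patches' row indices.
    """
    # Adjacency constraint: only patches in nearby rows are candidates,
    # which tolerates vertical misalignment while pruning bad matches.
    mask = np.abs(gallery_rows - query_row) <= l
    candidates = gallery_feats[mask]
    if candidates.size == 0:
        return np.inf  # no gallery patch satisfies the constraint
    dists = np.linalg.norm(candidates - query_patch, axis=1)
    return dists.min()
```

Restricting the search band trades a small amount of flexibility for robustness: a leg patch can no longer match a head patch, even if their color statistics happen to be similar.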
The paper introduces two salience detection methods: K-Nearest Neighbor (KNN) and One-Class SVM (OCSVM). These methods are used to identify salient patches that are unique and reliable for matching. The results show that the proposed approach outperforms existing methods in terms of matching accuracy, particularly in handling viewpoint and pose variations.
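The KNN variant can be illustrated with a short sketch: a patch is scored by first finding its nearest-neighbor patch in each of N unlabeled reference images, then taking the distance to the k-th smallest of those N nearest-neighbor distances. A patch that is far from even its k-th best match is unusual across the reference population, hence salient. This is a hedged sketch of that scoring idea under simplifying assumptions (Euclidean distance, a fraction `alpha` of the reference set for k); the function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

def knn_salience(patch_feat, ref_patch_sets, alpha=0.5):
    """Score a patch's salience as the distance to its k-th nearest
    neighbor among per-reference-image best matches, k = floor(alpha * N).

    patch_feat:     (d,) feature vector of the patch to score.
    ref_patch_sets: list of (M_i, d) arrays, one per reference image.
    """
    # One nearest-neighbor distance per reference image.
    nn_dists = []
    for ref_feats in ref_patch_sets:
        dists = np.linalg.norm(ref_feats - patch_feat, axis=1)
        nn_dists.append(dists.min())
    nn_dists = np.sort(nn_dists)
    k = int(np.floor(alpha * len(ref_patch_sets)))
    # Large k-th NN distance -> the patch has few good matches anywhere
    # in the reference set -> high salience.
    return nn_dists[k]
```

No identity labels enter the computation: salience emerges purely from how rare a patch's appearance is relative to an unlabeled reference set, which is what makes the framework unsupervised.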
The approach is combined with existing methods to enhance performance. The results demonstrate that the proposed method significantly improves the accuracy of person re-identification, especially in challenging conditions such as viewpoint changes and occlusions. The experiments show that the method achieves high matching rates, outperforming other approaches on both the VIPeR and ETHZ datasets.