Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification

9 May 2024 | Xiangbo Yin, Jiangming Shi, Yachao Zhang, Yang Lu, Zhizhong Zhang, Yuan Xie, Yanyun Qu
This paper proposes a robust pseudo-label learning framework with neighbor relation (RPNR) for unsupervised visible-infrared person re-identification (USVI-ReID). The main challenges in USVI-ReID are obtaining robust pseudo-labels and establishing reliable cross-modality correspondences. Existing methods often focus on shielding the model from noisy pseudo-labels, neglecting to calibrate them, which compromises model robustness. To address this, RPNR introduces two critical modules: Noisy Pseudo-label Calibration (NPC) and Neighbor Relation Learning (NRL) to correct noisy pseudo-labels and reduce intra-class variations. Additionally, two modules, Optimal Transport Prototype Matching (OTPM) and Memory Hybrid Learning (MHL), are introduced to establish reliable cross-modality correspondences and learn both modality-specific and modality-invariant information.
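The summary does not reproduce the paper's algorithms, but the idea behind neighbor-based pseudo-label calibration can be illustrated with a minimal sketch: refine each sample's cluster label by majority vote over its k nearest neighbors in feature space. The function name `calibrate_pseudo_labels` and the voting rule are assumptions for illustration, not the authors' NPC module.

```python
import numpy as np

def calibrate_pseudo_labels(features, labels, k=5):
    """Illustrative neighbor-vote calibration (not the paper's exact NPC).

    features: (n, d) embeddings; labels: (n,) integer pseudo-labels.
    Each sample adopts the majority label among its k nearest neighbors
    (cosine similarity), which corrects isolated noisy assignments.
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, -np.inf)        # exclude the sample itself
    refined = labels.copy()
    for i in range(len(labels)):
        nbrs = np.argsort(-sim[i])[:k]    # indices of k most similar samples
        votes = np.bincount(labels[nbrs]) # vote with the original labels
        refined[i] = votes.argmax()
    return refined
```

A sample whose pseudo-label disagrees with a tight neighborhood of same-identity samples is flipped to the neighborhood's majority label, which is one simple way to "calibrate rather than discard" noisy labels.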
Comprehensive experiments on the SYSU-MM01 and RegDB datasets show that RPNR outperforms the state-of-the-art GUR method, with an average Rank-1 improvement of 10.3%. The source code will be released soon.
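Cross-modality prototype matching via optimal transport, as in OTPM, is commonly realized with entropic-regularized (Sinkhorn) OT. The following is a hedged sketch under assumed conventions (L2-normalized class centroids, cosine-distance cost, uniform marginals); the function names and hyperparameters are illustrative, not the paper's implementation.

```python
import numpy as np

def sinkhorn(cost, epsilon=0.05, n_iters=100):
    """Entropic-regularized OT via Sinkhorn-Knopp scaling.

    cost: (n, m) cost matrix. Returns an (n, m) transport plan whose
    rows/columns approximately sum to uniform marginals.
    """
    n, m = cost.shape
    K = np.exp(-cost / epsilon)              # Gibbs kernel
    r, c = np.ones(n) / n, np.ones(m) / m    # uniform marginals
    v = np.ones(m) / m
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

def match_prototypes(vis_protos, ir_protos):
    """Match visible to infrared cluster prototypes (illustrative).

    Cost is cosine distance between L2-normalized prototypes; the
    row-wise argmax of the transport plan gives, for each visible
    prototype, its matched infrared prototype index.
    """
    vis = vis_protos / np.linalg.norm(vis_protos, axis=1, keepdims=True)
    ir = ir_protos / np.linalg.norm(ir_protos, axis=1, keepdims=True)
    cost = 1.0 - vis @ ir.T                  # cosine distance
    plan = sinkhorn(cost)
    return plan.argmax(axis=1)

# Toy check: infrared prototypes are a shuffled, lightly perturbed copy
# of the visible ones, so matching should recover the shuffle.
rng = np.random.default_rng(0)
base = rng.normal(size=(3, 8))
perm = np.array([2, 0, 1])
vis, ir = base, base[perm] + 0.01 * rng.normal(size=(3, 8))
print(match_prototypes(vis, ir))  # recovers the permutation
```

Compared with greedy nearest-prototype matching, the OT plan enforces the marginal constraints globally, so two visible prototypes cannot both be absorbed by the same infrared prototype.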