Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification

15 May 2018 | Weijian Deng†, Liang Zheng†§, Qixiang Ye†, Guoliang Kang‡, Yi Yang‡, Jianbin Jiao†*
This paper addresses the challenge of person re-identification (re-ID) across domains with significant differences, where models trained on one domain often fail to generalize to another. The authors propose a "learning via translation" framework that involves two main steps: unsupervised image-image translation and supervised feature learning. The key innovation is a similarity-preserving generative adversarial network (SPGAN), which preserves two types of unsupervised similarities: self-similarity (maintaining the ID information of the foreground pedestrian during translation) and domain-dissimilarity (ensuring that a translated image remains dissimilar to any target-domain image, since no target identity matches it). SPGAN couples a Siamese network with a CycleGAN and is trained using a contrastive loss.
Experimental results on two large-scale datasets, Market-1501 and DukeMTMC-reID, demonstrate that the proposed method significantly improves re-ID accuracy over baseline methods, achieving competitive performance against state-of-the-art approaches.
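The contrastive loss mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the embeddings, margin value, and function names below are hypothetical placeholders standing in for the Siamese network's outputs. A positive pair (label 1) is a source image and its translation, encoding self-similarity; a negative pair (label 0) is a translated image and an arbitrary target-domain image, encoding domain-dissimilarity.

```python
import numpy as np

def contrastive_loss(x1, x2, label, margin=2.0):
    """Standard contrastive loss for a Siamese network (illustrative).

    label = 1: positive pair -> pull embeddings together.
    label = 0: negative pair -> push embeddings apart, up to `margin`.
    The margin of 2.0 is an arbitrary illustrative choice.
    """
    d = np.linalg.norm(x1 - x2)          # Euclidean distance between embeddings
    if label == 1:
        return d ** 2                    # penalize distance for positive pairs
    return max(margin - d, 0.0) ** 2     # penalize closeness for negative pairs

# Toy 2-D embeddings standing in for learned features.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # translation of the same pedestrian
negative = np.array([0.2, 0.8])   # unrelated target-domain image

pos_loss = contrastive_loss(anchor, positive, label=1)
neg_loss = contrastive_loss(anchor, negative, label=0)
```

Minimizing this loss jointly with the CycleGAN objectives encourages the generator to change image style without destroying identity cues.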