Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification

15 May 2018 | Weijian Deng†, Liang Zheng†§, Qixiang Ye†, Guoliang Kang‡, Yi Yang‡, Jianbin Jiao†*
This paper presents a "learning via translation" framework for person re-identification (re-ID): train a re-ID model on one labeled domain so that it generalizes well to another, unlabeled domain. The framework first translates labeled images from the source domain to the target domain in an unsupervised manner, then trains re-ID models on the translated images with standard supervised methods.

The key contribution is the Similarity Preserving cycle-consistent Generative Adversarial Network (SPGAN), which combines a Siamese network (SiaNet) with a CycleGAN to preserve the underlying ID information during image-image translation. SPGAN enforces two properties: self-similarity (a translated image keeps the same ID as its source counterpart) and domain-dissimilarity (a translated image does not take on the ID of any target image). It does so with a contrastive loss that pulls a translated image toward its source counterpart and pushes it away from arbitrary target images.

The method is evaluated on two large-scale re-ID datasets, Market-1501 and DukeMTMC-reID. The results show that SPGAN generates images better suited for domain adaptation and achieves consistent, competitive re-ID accuracy, outperforming existing methods, particularly on Market-1501. The framework further incorporates local max pooling (LMP) on the learned feature maps to reduce the impact of noisy signals from translated images, improving re-ID performance.
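To make the contrastive loss concrete, here is a minimal PyTorch sketch of how SiaNet's objective can be written. The class name, the margin value of 2.0, and the L2 normalization of embeddings are illustrative assumptions rather than details taken verbatim from the paper; `label` is 1 for a positive pair (a source image and its translation) and 0 for a negative pair (a translated image and any target image).

```python
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveLoss(nn.Module):
    """Sketch of the contrastive loss behind SPGAN's SiaNet.

    label = 1: positive pair (source image, its translated version)
               -> pull embeddings together (self-similarity).
    label = 0: negative pair (translated image, arbitrary target image)
               -> push embeddings at least `margin` apart (domain-dissimilarity).
    """
    def __init__(self, margin: float = 2.0):  # margin value is an assumption
        super().__init__()
        self.margin = margin

    def forward(self, emb1, emb2, label):
        # Euclidean distance between L2-normalized embeddings, shape (N,)
        d = F.pairwise_distance(F.normalize(emb1, dim=1),
                                F.normalize(emb2, dim=1))
        pos = label * d.pow(2)                              # same-ID pairs
        neg = (1 - label) * F.relu(self.margin - d).pow(2)  # cross-domain pairs
        return (pos + neg).mean()
```

In training, this loss would be computed on (source, translated-source) pairs with label 1 and on (translated-source, target) pairs with label 0, then added to the CycleGAN objective with a weighting hyperparameter.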
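Local max pooling can be sketched the same way: split the re-ID backbone's last convolutional feature map into horizontal stripes and max-pool each stripe independently, so a noisy local response caused by translation artifacts stays confined to one stripe instead of dominating a global pooling step. The function name and the default number of parts below are assumptions for illustration.

```python
import torch

def local_max_pooling(feat: torch.Tensor, num_parts: int = 8) -> torch.Tensor:
    """Local max pooling (LMP) over a conv feature map (a sketch).

    feat: (N, C, H, W) activation from the backbone's last conv layer.
    Splits the map into `num_parts` horizontal stripes, max-pools each
    stripe, and concatenates the results into one descriptor.
    """
    stripes = torch.chunk(feat, num_parts, dim=2)    # split along height
    pooled = [s.amax(dim=(2, 3)) for s in stripes]   # (N, C) per stripe
    return torch.cat(pooled, dim=1)                  # (N, C * num_parts)
```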