Causality-Inspired Invariant Representation Learning for Text-Based Person Retrieval

2024 | Yu Liu, Guihe Qin, Haipeng Chen, Zhiyong Cheng, Xun Yang
The paper "Causality-Inspired Invariant Representation Learning for Text-Based Person Retrieval" addresses text-based person retrieval (TPR), the task of retrieving images of a specific pedestrian from a textual query. Traditional approaches rely on pre-trained deep neural networks to learn cross-modal correlations, but because they capture statistical rather than causal correlations, they struggle with environmental changes and with images of pedestrians who share similar attributes.
The authors propose Invariant Representation Learning for TPR (IRLT), which takes a causal view to extract robust visual representations that are independent of non-causal factors and sufficient for reliable retrieval. IRLT introduces a style intervener and a scene simulator to enforce these two properties, improving the model's robustness to variations in illumination, pose, and occlusion. Extensive experiments on three datasets demonstrate the effectiveness and generalization of IRLT, with significant gains over existing baselines in both accuracy and robustness. The method is model-agnostic and can be integrated with various TPR frameworks, making it a promising approach for improving the reliability of text-based person retrieval.
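The summary does not describe how the style intervener is implemented. As a rough, hypothetical illustration of the general idea of intervening on style while preserving content, the sketch below perturbs per-channel feature statistics (mean and standard deviation), a common way to simulate style shifts such as illumination changes; the function name and noise model are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def style_intervene(features, noise_scale=0.1, rng=None):
    """Hypothetical style intervention: perturb per-channel feature
    statistics (mean/std) to simulate style changes (e.g. illumination)
    while keeping the normalized 'content' of the features fixed.

    features: array of shape (channels, spatial_positions)
    """
    rng = np.random.default_rng(rng)
    mu = features.mean(axis=1, keepdims=True)
    sigma = features.std(axis=1, keepdims=True) + 1e-6
    content = (features - mu) / sigma  # style-normalized content

    # Sample perturbed style statistics around the originals.
    new_mu = mu * (1.0 + noise_scale * rng.standard_normal(mu.shape))
    new_sigma = sigma * (1.0 + noise_scale * rng.standard_normal(sigma.shape))
    return content * new_sigma + new_mu

# Example: intervene on 8-channel features over a 7x7 spatial grid.
feats = np.random.default_rng(0).standard_normal((8, 49))
augmented = style_intervene(feats, noise_scale=0.2, rng=1)
```

Training a retrieval model to produce the same representation for `feats` and `augmented` would encourage invariance to such non-causal style factors, in the spirit of the properties IRLT enforces.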