Harmonious Attention Network for Person Re-Identification

22 Feb 2018 | Wei Li, Xiatian Zhu, Shaogang Gong
This paper introduces the Harmonious Attention Convolutional Neural Network (HA-CNN) for person re-identification (re-id). HA-CNN jointly learns soft pixel-level attention and hard regional attention while simultaneously optimizing the feature representations, improving re-id on uncontrolled, misaligned person images. The model addresses limitations of existing methods, which often rely on well-aligned images or constrained attention-selection mechanisms. The HA-CNN architecture is lightweight and efficient, making it suitable for large-scale benchmarks with limited training data. Extensive evaluations on three datasets (CUHK03, Market-1501, and DukeMTMC-ReID) demonstrate superior performance over state-of-the-art methods. The paper also includes detailed analyses of the model's components and of its handling of the different attention types and their cross-attention interaction.
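To make the two attention types concrete, the following is a minimal, simplified sketch (not the paper's actual implementation): soft attention reweights every spatial position of a feature map with a sigmoid-squashed saliency score, while hard attention selects a discrete sub-region (e.g. a body part) by cropping at a predicted location. The function names, the toy 2×2 feature map, and the fixed crop coordinates are all illustrative assumptions; in HA-CNN both the scores and the region locations are learned end-to-end.

```python
import math

def sigmoid(x):
    # Squash a saliency score into a (0, 1) attention weight.
    return 1.0 / (1.0 + math.exp(-x))

def soft_attention(features, scores):
    """Soft pixel attention (sketch): multiply each spatial position
    of a 2-D feature map by its sigmoid-normalized saliency score."""
    return [[f * sigmoid(s) for f, s in zip(f_row, s_row)]
            for f_row, s_row in zip(features, scores)]

def hard_attention(features, top, left, height, width):
    """Hard regional attention (sketch): crop a fixed-size window whose
    location would, in the real model, be predicted by a small subnetwork."""
    return [row[left:left + width] for row in features[top:top + height]]

# Toy 2x2 feature map and per-position saliency scores (illustrative values).
features = [[1.0, 2.0], [3.0, 4.0]]
scores = [[0.0, 10.0], [-10.0, 0.0]]

soft = soft_attention(features, scores)       # weights ~0.5, ~1, ~0, ~0.5
region = hard_attention(features, 0, 1, 2, 1) # crop the right column
```

Note that soft attention keeps every position but scales its contribution continuously, whereas hard attention makes a discrete keep/discard decision per region; HA-CNN's contribution is learning both jointly so they reinforce each other.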