This paper proposes a feature calibration module (FCM) for clustering-based unsupervised object re-identification (ReID). FCM improves feature quality before pseudo-label generation, enabling more accurate clustering and, in turn, better representation learning. The module uses a nonparametric graph attention network to calibrate features, pulling similar instances together in the feature space while pushing dissimilar instances apart. This calibration makes the resulting pseudo-labels more reliable, which leads to better representation learning. FCM is simple, parameter-free, and can be seamlessly integrated into existing methods without affecting training efficiency. Experiments on benchmark datasets, including Market-1501, MSMT17, and DukeMTMC-reID, show that FCM consistently improves baseline performance and achieves state-of-the-art results; on MSMT17, for example, FCM improves mAP by 8.2% over the baseline. The method also generalizes to other settings, including unsupervised visible-infrared person ReID and domain adaptation. FCM acts as a catalyst that strengthens the interaction between pseudo-label generation and representation learning. The results demonstrate that FCM significantly improves both clustering quality and the learned representations, making it a valuable contribution to unsupervised ReID.
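To make the calibration idea concrete, the following is a minimal PyTorch sketch of a parameter-free, graph-attention-style calibration step applied before clustering. It assumes L2-normalized features and uses a hypothetical function name `calibrate_features` with illustrative neighborhood-size and temperature hyperparameters; these are assumptions for exposition, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def calibrate_features(feats: torch.Tensor, k: int = 20, temperature: float = 0.05) -> torch.Tensor:
    """Parameter-free graph-attention-style feature calibration (illustrative sketch).

    Each feature is refined by a softmax-weighted aggregation over its k nearest
    neighbors in the batch/memory, pulling similar instances together before
    pseudo-label generation. `k` and `temperature` are assumed hyperparameters.
    """
    feats = F.normalize(feats, dim=1)                # L2-normalize features
    sim = feats @ feats.t()                          # cosine-similarity graph over all instances
    topk_sim, topk_idx = sim.topk(k, dim=1)          # keep k most similar instances per node
    attn = F.softmax(topk_sim / temperature, dim=1)  # nonparametric attention weights (no learned params)
    neighbors = feats[topk_idx]                      # (N, k, D) neighbor features
    calibrated = (attn.unsqueeze(-1) * neighbors).sum(dim=1)  # attention-weighted aggregation
    return F.normalize(calibrated, dim=1)            # re-normalize calibrated features

# Illustrative usage: calibrate features, then cluster to obtain pseudo-labels
# feats = extract_features(backbone, unlabeled_loader)        # hypothetical helper
# calibrated = calibrate_features(feats)
# pseudo_labels = DBSCAN(eps=0.6).fit_predict(calibrated.cpu().numpy())
```

Because the calibration uses only similarity-based attention and introduces no learnable weights, it can be inserted before an existing clustering stage without changing the training pipeline or adding optimization cost, which is consistent with the plug-and-play property described above.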