Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection

17 Jan 2024 | Yuanpeng Tu, Boshen Zhang, Liang Liu, Yuxi Li, Xuhai Chen, Jiangning Zhang, Yabiao Wang, Chengjie Wang, Cai Rong Zhao
This paper proposes LSFA, a self-supervised multi-modal feature adaptation framework for 3D industrial anomaly detection. The method addresses the domain gap between pre-trained models and industrial data, aiming to make pre-trained features better suited to anomaly detection. LSFA performs adaptation from two views: intra-modal and cross-modal. Intra-modal adaptation optimizes feature compactness through dynamic memory banks, while cross-modal alignment enforces consistency between modalities at both the patch and object levels.

The framework is evaluated on two benchmark datasets, MVTec-3D AD and Eyecandies, achieving significant performance improvements. LSFA outperforms previous state-of-the-art methods, reaching 97.1% I-AUROC on MVTec-3D AD, and is effective at capturing subtle anomalies while avoiding false positives. The framework is also tested in few-shot settings and compared against fine-tuning baselines, demonstrating its robustness and effectiveness for multi-modal anomaly detection.
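The two adaptation views described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the nearest-neighbor compactness objective, and the cosine-based alignment terms are simplifying assumptions standing in for LSFA's actual losses, which the summary does not specify in detail.

```python
import numpy as np

def intra_modal_compactness(feats, memory_bank):
    """Hypothetical stand-in for intra-modal adaptation: pull each patch
    feature toward its nearest entry in a (dynamic) memory bank by
    penalizing the mean nearest-neighbor squared distance."""
    # pairwise squared distances, shape (num_patches, bank_size)
    d = ((feats[:, None, :] - memory_bank[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

def cross_modal_alignment(rgb_feats, pc_feats):
    """Hypothetical stand-in for cross-modal alignment: encourage 2D/3D
    consistency per patch (patch level) and between pooled features
    (object level), via cosine similarity."""
    def cos(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return (a * b).sum(-1)
    patch_loss = (1.0 - cos(rgb_feats, pc_feats)).mean()   # patch level
    obj_loss = 1.0 - cos(rgb_feats.mean(0), pc_feats.mean(0))  # object level
    return patch_loss + obj_loss
```

Both terms vanish in the ideal case: features that already sit on the memory bank incur zero compactness loss, and perfectly aligned RGB/point-cloud features incur zero alignment loss.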