The paper "PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection" addresses the challenge of one-class anomaly detection using few-shot learning. The authors propose a novel method called PromptAD, which automatically learns prompts to guide anomaly detection. The key contributions of the paper are:
1. **Semantic Concatenation (SC)**: This technique transposes normal prompts into anomaly prompts by concatenating them with anomaly suffixes, producing a large number of negative prompt samples for prompt learning in the one-class setting (see the first sketch after this list).
2. **Explicit Anomaly Margin (EAM)**: Because no real anomaly images are available, EAM introduces a hyper-parameter that explicitly controls the margin between normal and anomaly prompt features, keeping the two sets sufficiently separated (see the second sketch after this list).
3. **Methodology**: PromptAD is built on a modified version of CLIP (VV-CLIP) and uses SC and EAM to enhance prompt learning. The method also includes vision-guided anomaly detection (VAD) to improve pixel-level performance (see the third sketch after this list).
4. **Experiments**: PromptAD achieves state-of-the-art results in 11 out of 12 few-shot settings on the MVTec and VisA datasets, demonstrating its effectiveness for both image-level and pixel-level anomaly detection.
5. **Ablation Study**: The paper conducts ablation experiments to validate the impact of SC and EAM on the overall performance of PromptAD.
6. **Hyper-parameter Analysis**: The effect of different hyper-parameters (e.g., $N$, $L$, and $\lambda$) on PromptAD's performance is analyzed.
7. **Visualization Results**: Visualizations of visual and textual features after normalization show clear discrimination between normal and anomaly prompt features, supporting the effectiveness of PromptAD.
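The prompt-construction idea behind SC can be illustrated with a short sketch. The templates and suffixes below are illustrative placeholders, not the paper's actual prompt sets; the point is only that every normal prompt combined with every anomaly suffix yields a negative prompt for learning.

```python
# Illustrative sketch of Semantic Concatenation (SC): anomaly prompts are
# obtained by appending anomaly suffixes to normal prompt templates.
# The templates and suffixes here are made-up examples, not the paper's lists.

normal_templates = [
    "a photo of a normal {}",
    "a cropped photo of a flawless {}",
]
anomaly_suffixes = [
    "with a scratch",
    "with a crack",
    "with a color stain",
]

def build_prompts(class_name: str):
    normal_prompts = [t.format(class_name) for t in normal_templates]
    # SC: each normal prompt concatenated with each anomaly suffix
    # yields a negative (anomaly) prompt for one-class prompt learning.
    anomaly_prompts = [f"{p} {s}" for p in normal_prompts for s in anomaly_suffixes]
    return normal_prompts, anomaly_prompts

normal_prompts, anomaly_prompts = build_prompts("bottle")
print(len(normal_prompts), len(anomaly_prompts))  # 2 normal, 6 anomaly prompts
```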
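EAM can be read as a hinge-style constraint on the learned prompt features. The sketch below is a minimal interpretation under stated assumptions: the cosine-similarity formulation, the `margin` default, and the feature shapes are illustrative choices, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def eam_loss(image_feat, normal_proto, anomaly_proto, margin=0.5):
    """Hinge-style margin loss (an illustrative reading of EAM).

    Encourages the normal image feature to be at least `margin` closer
    (in cosine similarity) to the normal prompt prototype than to the
    anomaly prompt prototype. All inputs are (D,) feature vectors.
    """
    image_feat = F.normalize(image_feat, dim=-1)
    normal_proto = F.normalize(normal_proto, dim=-1)
    anomaly_proto = F.normalize(anomaly_proto, dim=-1)

    sim_normal = image_feat @ normal_proto    # similarity to normal prompts
    sim_anomaly = image_feat @ anomaly_proto  # similarity to anomaly prompts

    # Penalize cases where the anomaly prototype is not separated from the
    # normal one by at least `margin` with respect to a normal image.
    return torch.clamp(margin - (sim_normal - sim_anomaly), min=0.0)

# Example usage with random features (D = 512, as in CLIP ViT-B/16 text space).
loss = eam_loss(torch.randn(512), torch.randn(512), torch.randn(512))
```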
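How the text branch and the vision-guided branch can be combined into a single pixel-level anomaly map is sketched below under common few-shot-AD assumptions: a softmax over normal/anomaly prompt similarities for the text-guided score, a nearest-neighbor distance to a memory bank of normal patch features for the vision-guided score, and a weight `lam` (standing in for the paper's $\lambda$) to fuse them. The exact fusion used in PromptAD may differ.

```python
import torch
import torch.nn.functional as F

def anomaly_map(patch_feats, normal_text, anomaly_text, memory_bank, lam=0.5):
    """Illustrative fusion of text-guided and vision-guided anomaly scores.

    patch_feats:  (P, D) patch features from the image encoder
    normal_text:  (D,)   aggregated normal prompt feature
    anomaly_text: (D,)   aggregated anomaly prompt feature
    memory_bank:  (M, D) stored normal patch features (few-shot reference set)
    """
    patch_feats = F.normalize(patch_feats, dim=-1)
    normal_text = F.normalize(normal_text, dim=-1)
    anomaly_text = F.normalize(anomaly_text, dim=-1)
    memory_bank = F.normalize(memory_bank, dim=-1)

    # Text-guided score: softmax over (normal, anomaly) prompt similarities,
    # then take the anomaly probability per patch.
    sims = torch.stack([patch_feats @ normal_text, patch_feats @ anomaly_text], dim=-1)
    text_score = sims.softmax(dim=-1)[:, 1]

    # Vision-guided score: distance to the nearest normal patch in the memory bank.
    dists = torch.cdist(patch_feats, memory_bank)  # (P, M)
    vision_score = dists.min(dim=-1).values

    # Weighted fusion of the two per-patch scores into one anomaly map.
    return lam * text_score + (1.0 - lam) * vision_score
```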
The paper concludes by highlighting the practical applications of PromptAD in industrial settings, where rapid model training with few samples is crucial.