ALFA is a training-free approach to zero-shot visual anomaly detection (VAD) that leverages large vision-language models (LVLMs) with adaptive prompting. The method targets two challenges in zero-shot VAD: cross-semantic ambiguity in anomaly prompts and image-text alignment at the local, pixel level. ALFA introduces a run-time prompt adaptation strategy that generates informative anomaly prompts tailored to each query image via a contextual scoring mechanism, dynamically adjusting the prompts to the image content so the model can better separate normal from abnormal conditions. In addition, ALFA incorporates a novel fine-grained aligner that projects image-text alignment from the global to the local semantic space, enabling precise anomaly localization. Without additional data or fine-tuning, ALFA improves PRO by 12.1% on MVTec and 8.9% on VisA over state-of-the-art zero-shot VAD approaches, and extensive experiments validate its effectiveness at both image-level and pixel-level anomaly detection. Its run-time prompt adaptation and efficient alignment strategy make it a promising solution for zero-shot VAD tasks.
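To make the two mechanisms concrete, the following is a minimal PyTorch sketch, not the paper's implementation: contextual_prompt_scores ranks candidate anomaly prompts by global image-text similarity (standing in for ALFA's contextual scoring), and anomaly_map compares per-patch embeddings against normal/abnormal text embeddings (standing in for the fine-grained aligner's global-to-local projection). All function names, tensor shapes, the temperature value, and the random stand-in features are assumptions for illustration; in practice the embeddings would come from a frozen LVLM encoder.

```python
import torch
import torch.nn.functional as F

# Assumed shapes: D = shared embedding dim, P = number of image patches.
D, P = 512, 196

def contextual_prompt_scores(image_emb: torch.Tensor,
                             prompt_embs: torch.Tensor) -> torch.Tensor:
    """Score K candidate anomaly prompts against one query image.

    image_emb:   (D,)   global image embedding from the vision encoder.
    prompt_embs: (K, D) embeddings of K candidate prompts.
    Returns a (K,) vector of cosine similarities used to rank prompts.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    prompt_embs = F.normalize(prompt_embs, dim=-1)
    return prompt_embs @ image_emb

def anomaly_map(patch_embs: torch.Tensor,
                normal_emb: torch.Tensor,
                abnormal_emb: torch.Tensor,
                temperature: float = 0.07) -> torch.Tensor:
    """Project image-text alignment from the global to the patch level.

    patch_embs: (P, D) per-patch embeddings; normal_emb / abnormal_emb: (D,)
    aggregated text embeddings of the selected normal / abnormal prompts.
    Returns a (P,) map of per-patch abnormality probabilities.
    """
    patch_embs = F.normalize(patch_embs, dim=-1)
    text = F.normalize(torch.stack([normal_emb, abnormal_emb]), dim=-1)  # (2, D)
    logits = patch_embs @ text.T / temperature                           # (P, 2)
    return logits.softmax(dim=-1)[:, 1]  # probability of the "abnormal" class

# Random stand-ins for encoder outputs, used only to make the sketch runnable.
image_emb = torch.randn(D)
patch_embs = torch.randn(P, D)
prompt_embs = torch.randn(8, D)   # 8 candidate anomaly prompts
normal_emb = torch.randn(D)       # embedding of a "normal" prompt set

# Run-time adaptation: keep only the prompts most relevant to this image.
scores = contextual_prompt_scores(image_emb, prompt_embs)
abnormal_emb = prompt_embs[scores.topk(3).indices].mean(dim=0)

# Fine-grained alignment: pixel-level map, image-level score from its peak.
amap = anomaly_map(patch_embs, normal_emb, abnormal_emb)
image_score = amap.max()
print(amap.shape, float(image_score))
```

The sketch mirrors the training-free character of the approach: both steps are forward passes over frozen embeddings, with no gradient updates or auxiliary data.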