This paper proposes DiAD, a diffusion-based framework for multi-class anomaly detection. The framework comprises a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion (SD) denoising network, and a feature-space pre-trained feature extractor. The SG network reconstructs anomalous regions while preserving the semantic information of the original image, and a Spatial-aware Feature Fusion (SFF) block integrates features at different scales to maximize reconstruction accuracy. The input and reconstructed images are then passed through the pre-trained feature extractor, and anomaly maps are generated from the features extracted at different scales. By leveraging the powerful image generation capabilities of diffusion models, the framework addresses two central challenges of multi-class anomaly detection: preserving the image's category and maintaining pixel-wise structural integrity during reconstruction. Experiments on the MVTec-AD and VisA datasets demonstrate that DiAD achieves state-of-the-art performance in both anomaly detection and localization, significantly outperforming existing non-diffusion and diffusion-based methods, with 97.2/99.0 (AUROC/AP) for detection and 96.8/52.6 for localization on the multi-class MVTec-AD dataset.
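
To make the final scoring stage concrete, the following is a minimal sketch of multi-scale anomaly-map generation in the spirit of the description above. The abstract does not specify the backbone, the distance metric, or the fusion rule, so this sketch assumes a torchvision ResNet-50 as the pre-trained extractor, cosine distance between input and reconstruction features, and bilinear upsampling with summation across scales; the helper name `extract_features` is illustrative, not from the paper.

```python
# Sketch: compare multi-scale features of an input image and its diffusion
# reconstruction to produce a per-pixel anomaly map. Backbone choice (ResNet-50),
# cosine distance, and sum-over-scales fusion are assumptions, not the paper's
# confirmed implementation.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()

def extract_features(x: torch.Tensor) -> list[torch.Tensor]:
    """Collect feature maps from several ResNet stages (different spatial scales)."""
    feats = []
    x = backbone.conv1(x)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    for stage in (backbone.layer1, backbone.layer2, backbone.layer3):
        x = stage(x)
        feats.append(x)
    return feats

@torch.no_grad()
def anomaly_map(image: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    """Per-pixel anomaly score: 1 - cosine similarity between features of the
    input and its reconstruction, upsampled to image size and summed over scales."""
    h, w = image.shape[-2:]
    score = torch.zeros(image.shape[0], 1, h, w)
    for f_in, f_rec in zip(extract_features(image), extract_features(recon)):
        dist = 1.0 - F.cosine_similarity(f_in, f_rec, dim=1, eps=1e-6)  # (B, h', w')
        score += F.interpolate(dist.unsqueeze(1), size=(h, w),
                               mode="bilinear", align_corners=False)
    return score  # higher values mark regions the reconstruction failed to reproduce

# Usage: image and recon are (B, 3, H, W) tensors normalized for the backbone.
# amap = anomaly_map(image, recon)
```

An image-level detection score can then be derived from the map, for instance by taking its maximum after smoothing; the exact pooling used by DiAD is not stated in the abstract.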