The paper presents a novel Multi-Modal Reverse Distillation (MMRD) paradigm for multi-modal anomaly detection, which integrates auxiliary modalities (such as depth or surface-normal maps) with RGB images to enhance anomaly detection capabilities. The MMRD paradigm consists of a frozen multi-modal teacher encoder and a learnable multi-modal student decoder. The teacher encoder extracts complementary visual features from the different modalities using a Siamese architecture and fuses these features in a parameter-free manner to form the distillation targets. The student decoder learns modality-related priors from normal data and performs cross-modal interaction to produce multi-modal representations for reconstructing those targets. Extensive experiments on the MVTec 3D-AD and Eyecandies benchmarks demonstrate that the proposed MMRD outperforms state-of-the-art methods in both anomaly detection and localization. The main contributions of the paper are fourfold: developing the MMRD paradigm, designing the multi-modal teacher encoder, creating the multi-modal student decoder, and achieving state-of-the-art results on multi-modal anomaly detection benchmarks.
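The reverse-distillation idea summarized above can be sketched in a few lines: a frozen teacher encodes each modality with shared (Siamese) weights, the features are fused without learnable parameters, and anomalies are scored by how poorly the student's reconstruction matches the fused target. The tiny tanh encoder, averaging fusion, and cosine-distance scoring below are illustrative assumptions for this sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Frozen teacher branch: one tanh layer standing in for a Siamese
    encoder whose weights are shared across modalities."""
    return np.tanh(x @ W)

def fuse(f_rgb, f_aux):
    """Parameter-free fusion of modality features (here: simple averaging)."""
    return 0.5 * (f_rgb + f_aux)

def cosine_distance(a, b, eps=1e-8):
    """Per-position anomaly score: 1 - cosine similarity between the
    teacher's fused target and the student's reconstruction."""
    num = (a * b).sum(axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return 1.0 - num / den

# Toy data: N spatial positions, D input dims per modality, H feature dims.
N, D, H = 6, 8, 4
W_teacher = rng.normal(size=(D, H))  # frozen, shared across modalities
rgb = rng.normal(size=(N, D))        # RGB features
aux = rng.normal(size=(N, D))        # auxiliary modality (e.g., depth)

target = fuse(encode(rgb, W_teacher), encode(aux, W_teacher))

# A well-trained student reproduces the target on normal data, so its
# scores stay near zero; a mismatched reconstruction scores higher.
student_normal = target.copy()
student_anomalous = target + rng.normal(scale=1.0, size=target.shape)

print(cosine_distance(target, student_normal).max())   # near zero
print(cosine_distance(target, student_anomalous).mean())
```

At test time, the per-position distances form the anomaly localization map, and a pooled statistic (e.g., the maximum) gives the image-level detection score.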