This paper addresses memorization in diffusion models, which can lead to unintended reproduction of training data and poses legal and ethical challenges. The authors propose a method to detect memorized prompts by analyzing the magnitude of the text-conditional noise predictions, achieving high accuracy with minimal computational overhead. They also introduce an explainable approach that identifies the tokens responsible for memorization, allowing users to adjust their prompts. Additionally, they present two mitigation strategies: an inference-time strategy that perturbs the prompt embeddings, and a training-time strategy that filters out samples flagged as memorized during training. These methods effectively reduce memorization while maintaining high-quality generation. The paper includes experimental results and comparisons with existing techniques, demonstrating the effectiveness and efficiency of the proposed methods.
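
As a rough illustration of the detection signal summarized above, the sketch below scores a prompt by the average magnitude of the difference between text-conditional and unconditional noise predictions over a few denoising steps. The diffusers-style UNet interface (`encoder_hidden_states` argument, `.sample` output), the function name, and the argument shapes are assumptions for illustration, not the authors' exact implementation.

```python
import torch

def memorization_score(unet, latents, timesteps, prompt_emb, uncond_emb):
    """Hypothetical sketch of a magnitude-based memorization signal.

    A large average norm of (conditional - unconditional) noise predictions
    is treated as evidence that the prompt may trigger memorized content.
    """
    scores = []
    for t in timesteps:
        eps_cond = unet(latents, t, encoder_hidden_states=prompt_emb).sample
        eps_uncond = unet(latents, t, encoder_hidden_states=uncond_emb).sample
        # Per-sample L2 norm of the text-conditional component of the prediction
        diff = (eps_cond - eps_uncond).flatten(start_dim=1)
        scores.append(diff.norm(dim=1))
    # Average over the sampled timesteps; higher values suggest memorization
    return torch.stack(scores).mean(dim=0)
```

In practice one would compare this score against a threshold calibrated on prompts known to be non-memorized; the threshold and the choice of timesteps are assumed details here, not values from the paper.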