Towards Memorization-Free Diffusion Models

1 Apr 2024 | Chen Chen, Daochang Liu, Chang Xu
This paper introduces Anti-Memorization Guidance (AMG), a novel framework that addresses the issue of memorization in diffusion models. Diffusion models, known for their ability to generate high-quality images, can inadvertently memorize and reproduce training data, leading to legal and ethical concerns. AMG employs three targeted guidance strategies to mitigate memorization: despecification guidance (Gspe), caption deduplication guidance (Gdup), and dissimilarity guidance (Gsim). These strategies work together to ensure memorization-free outputs while maintaining image quality and text alignment. AMG also features an automatic detection system that identifies potential memorization during inference, allowing guidance strategies to be applied selectively without interfering with the original sampling process.

The framework was tested on pretrained Denoising Diffusion Probabilistic Models (DDPM) and Stable Diffusion across various generation tasks, demonstrating that AMG eliminates all detected instances of memorization with minimal impact on image quality and text alignment. The results show that AMG is the first method to achieve this, offering a balanced trade-off between privacy and utility. The paper also discusses the causes of memorization in diffusion models, including overly specific user prompts, duplicated training images, and duplicated captions. Ablation studies highlight the contribution of each guidance strategy and show that AMG outperforms existing methods in reducing memorization while preserving output quality.
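The selective-application idea can be illustrated with a toy sketch: at each denoising step, a detection score flags likely memorization, and only then is a guidance correction added. All names here (`denoise_step`, `memorization_score`, `guidance`, the threshold and scale values) are hypothetical stand-ins, not the paper's actual formulas:

```python
import numpy as np

def denoise_step(x, t):
    # Stand-in for one reverse-diffusion step (toy dynamics, hypothetical).
    return x * 0.9

def memorization_score(x):
    # Proxy for the paper's automatic detection signal; a real system
    # would compare the sample against training data (hypothetical).
    return float(np.abs(x).mean())

def guidance(x):
    # Stand-in for the combined AMG corrections (Gspe, Gdup, Gsim);
    # here simply a small push away from the current (memorized) mode.
    return -0.5 * np.sign(x)

def sample(x0, steps=10, threshold=0.3, scale=0.1):
    """Selective guidance: apply the correction only when the detection
    score exceeds the threshold, leaving all other steps on the
    original sampling trajectory."""
    x = x0
    applied = 0
    for t in range(steps):
        x = denoise_step(x, t)
        if memorization_score(x) > threshold:
            x = x + scale * guidance(x)
            applied += 1
    return x, applied
```

The key design point mirrored here is that guidance is conditional: steps whose detection score stays below the threshold are left untouched, which is how AMG avoids degrading quality on samples that were never at risk of memorization.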
The paper concludes that AMG offers a flexible and effective solution for generating memorization-free images while preserving the utility of diffusion models.