Towards Memorization-Free Diffusion Models

1 Apr 2024 | Chen Chen, Daochang Liu, Chang Xu
The paper "Towards Memorization-Free Diffusion Models" by Chen Chen, Daochang Liu, and Chang Xu introduces Anti-Memorization Guidance (AMG), a framework designed to address memorization in pre-trained diffusion models. Memorization, where models replicate training data during inference, poses significant legal and ethical risks due to copyright and privacy concerns.

AMG employs three targeted guidance strategies, each addressing a specific cause of memorization: despecification guidance ($G_{spe}$), caption deduplication guidance ($G_{dup}$), and dissimilarity guidance ($G_{sim}$). Together, these strategies ensure that generated outputs are memorization-free while maintaining high image quality and text alignment. AMG also features an automatic detection system that continuously assesses the similarity between the current prediction and the training data during inference, so guidance is applied selectively. This minimizes interference with the original sampling process and preserves output utility.

The effectiveness of AMG is demonstrated through experiments on Denoising Diffusion Probabilistic Models (DDPM) and Stable Diffusion across unconditional, class-conditional, and text-conditional generation tasks. The results show that AMG eradicates memorization with minimal impact on image quality and text alignment, as measured by FID and CLIP scores. The paper also analyzes the contribution of each guidance strategy to the privacy-utility trade-off, and includes ablation studies and baseline comparisons to validate AMG's effectiveness.
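The selective-guidance idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: the similarity measure, the single combined guidance push, and all names (`similarity`, `amg_sampling_step`, `threshold`, `scale`) are simplifying assumptions standing in for AMG's detection system and its three guidance terms.

```python
import numpy as np

def similarity(pred, train_set):
    # Illustrative stand-in for AMG's similarity measure: negative mean
    # squared distance to the nearest training sample (higher = more similar).
    return -min(np.mean((pred - t) ** 2) for t in train_set)

def amg_sampling_step(x_t, eps_pred, train_set, threshold=-0.01, scale=0.1):
    """One denoising step with selective anti-memorization guidance (sketch).

    Guidance is applied only when the current prediction is too similar to
    a training sample, mirroring AMG's automatic detection: when similarity
    stays below the threshold, the original sampling process is untouched.
    """
    x0_pred = x_t - eps_pred  # toy stand-in for the model's clean-image estimate
    if similarity(x0_pred, train_set) > threshold:
        # Push the prediction away from its nearest training neighbor --
        # a single simplified surrogate for G_spe, G_dup, and G_sim combined.
        nearest = min(train_set, key=lambda t: np.mean((x0_pred - t) ** 2))
        x0_pred = x0_pred + scale * (x0_pred - nearest)
    return x0_pred
```

In a real sampler this check would run at every denoising step against the training set (or an index over it), so only the steps that risk replication pay the cost of guidance.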
The supplementary material offers further insights into the synergistic use of AMG, additional evaluation methods, and implementation details.