Not All Noises Are Created Equally: Diffusion Noise Selection and Optimization

27 Jul 2024 | Zipeng Qi¹, Lichen Bai¹, Haoyi Xiong², and Zeke Xie¹,†
This paper investigates the impact of noise selection and optimization on diffusion models, revealing that not all noises are equally effective. The authors propose two novel methods: noise selection based on inversion stability, and noise optimization via gradient descent in the noise space. Both methods significantly improve the quality of generated images, as demonstrated by experiments on models such as SDXL and SDXL-turbo. The noise selection method identifies stable noises with higher inversion stability, which lead to better results; the noise optimization method actively optimizes a given noise to increase its inversion stability, without requiring model retraining. Experiments show that both methods outperform baseline models in human preference and objective evaluation metrics, with noise optimization achieving up to 72.5% winning rates on DrawBench. The results highlight the importance of the noise space in diffusion models and suggest that optimizing noise can yield significant improvements in generated results. The paper also discusses the limitations of the proposed methods, including computational cost and the need for further theoretical analysis. Overall, the study provides a new perspective on improving diffusion models through noise-space optimization.
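The two ideas can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the paper's implementation: `round_trip` mimics a sample-then-invert pass through a diffusion model with a fixed nonlinear map, the stability score is an assumed cosine similarity between a noise and its reconstruction, and the optimizer uses finite-difference gradient ascent with made-up hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed near-identity map standing in for a DDIM sample + inversion round trip.
A = rng.standard_normal((16, 16)) * 0.1 + np.eye(16)

def round_trip(noise):
    # Imperfect reconstruction of the noise after sampling and inverting.
    return np.tanh(A @ noise)

def inversion_stability(noise):
    # Cosine similarity between a noise and its reconstruction:
    # higher means the noise is more "stable" under inversion.
    rec = round_trip(noise)
    return float(rec @ noise / (np.linalg.norm(rec) * np.linalg.norm(noise)))

def select_noise(candidates):
    # Noise selection: keep the candidate with the best stability score.
    return max(candidates, key=inversion_stability)

def optimize_noise(noise, lr=0.02, steps=50, eps=1e-4):
    # Noise optimization: gradient ascent on the stability score,
    # with the gradient estimated by central finite differences.
    x = noise.copy()
    best_x, best_s = x.copy(), inversion_stability(x)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (inversion_stability(x + d)
                       - inversion_stability(x - d)) / (2 * eps)
        x = x + lr * grad
        s = inversion_stability(x)
        if s > best_s:  # track the best noise seen so far
            best_x, best_s = x.copy(), s
    return best_x

candidates = [rng.standard_normal(16) for _ in range(8)]
best = select_noise(candidates)        # selection: pick the most stable noise
improved = optimize_noise(best)        # optimization: refine it further
```

In the actual methods, the round trip would run the diffusion sampler and its inversion, and the gradient would come from backpropagation through the model rather than finite differences; the selection-then-optimization flow shown here mirrors the paper's pipeline.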