This paper explores the impact of different initial noises on the quality of images generated by diffusion models. The authors hypothesize that not all Gaussian noises are equally effective and propose two novel methods: noise selection and noise optimization. Noise selection chooses noises with higher inversion stability, while noise optimization enhances the inversion stability of arbitrary noises. Extensive experiments on representative diffusion models such as SDXL and SDXL-Turbo demonstrate that these methods significantly improve the quality of generated images, as measured by both human preference and objective evaluation metrics. The proposed methods achieve winning rates of up to 57% and 72.5%, respectively, in human preference tests on the DrawBench dataset. The paper also discusses the broader implications and limitations of the findings, highlighting the need for further theoretical understanding and better optimization strategies.
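The noise-selection idea can be sketched in a few lines: generate an image from each candidate noise, invert the image back to noise, and keep the candidate whose reconstruction is closest to the original. The sketch below is a minimal illustration, assuming hypothetical `sample` and `invert` stand-ins for the diffusion sampler and its inversion (e.g., DDIM inversion); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(noise):
    # Stand-in for the diffusion sampler (noise -> image); a real model
    # would run the full denoising process here.
    return np.tanh(noise)

def invert(image):
    # Stand-in for the inversion step (image -> reconstructed noise).
    return np.arctanh(np.clip(image, -0.999999, 0.999999))

def inversion_stability(noise):
    # Higher stability = smaller gap between the original noise and the
    # noise recovered by inverting the generated image.
    return -np.linalg.norm(noise - invert(sample(noise)))

# Draw several candidate Gaussian noises and select the most stable one.
candidates = [rng.standard_normal((4, 4)) for _ in range(8)]
best = max(candidates, key=inversion_stability)
```

With a real model, `sample` and `invert` would be the deterministic sampler and its approximate inverse; the selection criterion itself stays the same.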