Score Distillation Sampling (SDS) is a method that uses image diffusion models to control optimization problems using text prompts. This paper analyzes the SDS loss function, identifies an issue with its formulation, and proposes a simple yet effective fix. The authors decompose the loss into different factors and isolate the component responsible for noisy gradients. Instead of using high text guidance to account for noise, they train a shallow network to mimic the timestep-dependent frequency bias of the image diffusion model, effectively removing it. The proposed loss, called LMC-SDS, provides cleaner gradients along the learned manifold of real images, leading to better results in various applications, including image synthesis, editing, zero-shot image translation network training, and text-to-3D synthesis. The paper demonstrates the effectiveness of LMC-SDS through qualitative and quantitative experiments.
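To make the optimization setting concrete, the standard SDS update can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: `dummy_denoiser` is a hypothetical stand-in for a pretrained text-conditioned diffusion model's noise prediction, and the weighting `w(t)` is simplified to a constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_denoiser(x_t, t):
    # Hypothetical stand-in for the diffusion model's noise prediction
    # eps_theta(x_t; y, t); a real model would also condition on the
    # text prompt y.
    return 0.9 * x_t

def sds_gradient(x, t, alpha_bar, rng):
    """One Score Distillation Sampling gradient evaluation.

    Noise the current image x to timestep t, ask the diffusion model
    to predict the injected noise, and use (predicted - true) noise as
    the gradient with respect to x. As in SDS, the Jacobian of the
    denoiser itself is omitted.
    """
    eps = rng.standard_normal(x.shape)                        # true noise
    x_t = np.sqrt(alpha_bar) * x + np.sqrt(1 - alpha_bar) * eps
    eps_pred = dummy_denoiser(x_t, t)                         # model estimate
    w = 1.0                                                   # weighting w(t), simplified
    return w * (eps_pred - eps)

x = rng.standard_normal((8, 8))       # the image being optimized
grad = sds_gradient(x, t=500, alpha_bar=0.5, rng=rng)
x = x - 0.1 * grad                    # one gradient-descent step
```

The paper's analysis targets the residual `(eps_pred - eps)` above: it decomposes this term and shows that part of it produces noisy gradients, which LMC-SDS removes via the learned shallow network.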