Understanding Hallucinations in Diffusion Models through Mode Interpolation

25 Aug 2024 | Sumukh K Aithal, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter
This paper investigates "hallucinations" in diffusion models: generated samples that lie entirely outside the support of the training data. The authors attribute them to "mode interpolation," in which a diffusion model smoothly interpolates between nearby modes of the data distribution and thereby produces samples the training set never contained. They study the phenomenon systematically with experiments on mixtures of 1D and 2D Gaussians, showing that the model's smooth approximation of a discontinuous region of the score landscape is what drives interpolation between modes. The paper also demonstrates hallucinations in real-world settings, such as generating hands with extra or missing fingers when trained on a hand dataset.
To detect hallucinations, the authors propose a metric based on the variance of the predicted clean sample's trajectory during reverse diffusion: high variance late in sampling signals that the model is oscillating between modes. This metric removes over 95% of hallucinations while retaining 96% of in-support samples. Finally, they examine the consequences of hallucinations for recursive generative training (training new models on generated data), showing that filtering with the proposed metric mitigates hallucination accumulation and stabilizes recursive training.
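The detection idea can be sketched in a few lines: run DDPM ancestral sampling, record the predicted clean sample x0_hat at every step, and score each sample by the variance of that trajectory over the final steps. The sketch below is an illustrative assumption, not the authors' reference implementation; the schedule, the placeholder denoiser `eps_model`, and the window `last_k` are all stand-ins.

```python
import numpy as np

# Illustrative sketch of a trajectory-variance hallucination metric.
# eps_model, make_schedule, and last_k are assumptions for this example,
# not the paper's exact implementation.

def make_schedule(T=100, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative alpha products."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def reverse_diffusion_x0_trajectory(eps_model, x_T, betas, alphas, alpha_bars, rng):
    """DDPM ancestral sampling, recording the predicted clean sample
    x0_hat = (x_t - sqrt(1 - abar_t) * eps) / sqrt(abar_t) at each step t."""
    T = len(betas)
    x = x_T.copy()
    traj = np.empty((T,) + x.shape)
    for t in range(T - 1, -1, -1):
        eps = eps_model(x, t)
        traj[t] = (x - np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])
        # Standard DDPM posterior mean, plus noise on all but the last step.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return traj  # traj[0] is the final prediction, traj[T-1] the earliest

def hallucination_score(traj, last_k=20):
    """Variance of x0_hat over the final reverse steps. High variance means the
    prediction keeps drifting (e.g. between modes) -- a candidate hallucination."""
    return traj[:last_k].var(axis=0)

# Usage with a placeholder denoiser (a real model would be a trained network):
rng = np.random.default_rng(0)
betas, alphas, alpha_bars = make_schedule()
eps_model = lambda x, t: np.zeros_like(x)  # stand-in for a learned eps-predictor
traj = reverse_diffusion_x0_trajectory(
    eps_model, rng.standard_normal(8), betas, alphas, alpha_bars, rng
)
scores = hallucination_score(traj)  # one score per sample dimension
```

Samples whose score exceeds a threshold calibrated on known in-support data would be flagged and discarded, which is the filtering step used to stabilize recursive training.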