This paper introduces "Shake to Leak" (S2L), a new privacy risk in diffusion models in which fine-tuning amplifies existing privacy risks. S2L arises when a pre-trained diffusion model is fine-tuned on manipulated data, increasing its vulnerability to membership inference attacks (MIA) and data extraction. The authors demonstrate that S2L is effective across a range of fine-tuning strategies, including concept-injection methods (DreamBooth and Textual Inversion) and parameter-efficient methods (LoRA and Hypernetwork). In the worst case, S2L increases the AUC of MIA by 5.4% and raises the number of extracted private samples from nearly zero to 15.8 on average per target domain. The study highlights the severity of privacy risks associated with diffusion models and underscores the need for robust defense strategies. Extensive ablation studies examine the conditions under which S2L occurs and the influence of different fine-tuning methods and prior knowledge.
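To make the MIA metric concrete, below is a minimal sketch of a loss-threshold membership inference attack of the kind such evaluations typically use: lower denoising loss under the (fine-tuned) model is treated as evidence of membership, and the AUC is computed over member and non-member scores. The `diffusion_loss` helper, the model objects, and the sample sets are hypothetical placeholders for illustration, not the paper's actual code.

```python
# Minimal sketch of a loss-threshold membership inference attack (MIA)
# for a diffusion model, before vs. after S2L-style fine-tuning.
# `diffusion_loss(model, x)` is a hypothetical helper returning the
# denoising loss of sample x under the given model.
import numpy as np
from sklearn.metrics import roc_auc_score

def mia_scores(model, samples, diffusion_loss):
    # Lower denoising loss suggests the sample was seen in training,
    # so negate the loss to obtain a "membership" score.
    return np.array([-diffusion_loss(model, x) for x in samples])

def mia_auc(model, member_samples, nonmember_samples, diffusion_loss):
    scores = np.concatenate([
        mia_scores(model, member_samples, diffusion_loss),
        mia_scores(model, nonmember_samples, diffusion_loss),
    ])
    labels = np.concatenate([
        np.ones(len(member_samples)),      # true members of the training set
        np.zeros(len(nonmember_samples)),  # held-out non-members
    ])
    return roc_auc_score(labels, scores)

# Usage (hypothetical): compare attack AUC before and after fine-tuning
# on manipulated domain data to quantify the S2L amplification.
# auc_before = mia_auc(pretrained_model, members, nonmembers, diffusion_loss)
# auc_after  = mia_auc(finetuned_model,  members, nonmembers, diffusion_loss)
```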