This paper introduces two zero-shot audio editing techniques based on pre-trained diffusion models: ZETA (text-based) and ZEUS (unsupervised). ZETA leverages text prompts to modify audio signals, enabling changes in style, genre, and instrumentation while maintaining perceptual quality. ZEUS discovers semantically meaningful editing directions without supervision, allowing creative modifications such as melody improvisation. Both methods use DDPM inversion to extract latent noise vectors, which are then manipulated to generate the edited signal. ZETA applies text guidance by altering the prompt, while ZEUS perturbs the denoiser output along the top principal components of the posterior distribution. The methods are evaluated against state-of-the-art approaches and show superior performance in generating semantically meaningful edits. The paper also discusses the limitations of these methods, including the lack of control over which principal components are extracted in unsupervised editing. The results demonstrate that the proposed methods outperform existing techniques in both qualitative and quantitative assessments, enabling more flexible and creative audio editing.
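The core ZEUS idea described above, perturbing a denoiser output along the top principal components of a posterior distribution, can be illustrated with a minimal numpy sketch. All names, shapes, and the use of plain SVD here are assumptions for illustration, not the paper's actual implementation or API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for samples from the denoiser's posterior over the clean signal
# (n samples, d dimensions); in the real method these come from the diffusion model.
posterior_samples = rng.normal(size=(64, 128))

# Principal components via SVD of the centered samples.
centered = posterior_samples - posterior_samples.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
top_pcs = vt[:3]  # rows are unit-norm editing directions (top-3)

# Perturb a denoiser output along the first principal component;
# `strength` plays the role of an edit-magnitude knob.
denoiser_output = posterior_samples.mean(axis=0)
strength = 2.0
edited = denoiser_output + strength * top_pcs[0]
```

The perturbation is applied at each denoising step in the actual method; this sketch only shows the single-step geometry of moving the output along a semantically dominant direction.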