MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models

28 May 2024 | Yixiao Zhang¹, Yukara Ikemiya², Gus Xia³, Naoki Murata², Marco A. Martínez-Ramírez², Wei-Hsiang Liao², Yuki Mitsufuji², Simon Dixon¹
MusicMagus is a text-to-music editing framework that enables zero-shot modification of specific musical attributes, such as genre, mood, and instrument, while preserving the other aspects of a piece. It builds on pretrained text-to-music diffusion models and performs text-based intra-stem editing without any additional training: a word swap in the text prompt is transformed into a manipulation of the model's latent (text-embedding) space, and an extra constraint is added during diffusion to keep the edited result consistent with the original.

Concretely, the method edits the prompt by word swapping and constrains the cross-attention maps during the diffusion process so that the remaining elements of the music stay intact. Evaluated on timbre and style transfer tasks, MusicMagus outperforms both zero-shot and supervised baselines in subjective and objective experiments, maintaining structural consistency and pitch accuracy while achieving the intended semantic change.

MusicMagus can also edit real-world music audio via DDIM inversion, although performance on real recordings does not yet match that on audio synthesized by the diffusion model itself. The current implementation is built on AudioLDM 2, which limits the generation of multi-instrument music and the handling of complex compositions, and the approach still faces challenges in audio quality and in the stability of zero-shot editing. Future work aims to address these limitations and to extend the model to more complex music generation and editing tasks.
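To make the word-swapping step more concrete, the sketch below shows one illustrative way to turn a prompt edit into a text-embedding-space manipulation: an edit direction is estimated by contrasting prompts that differ only in the swapped word, and the prompt embedding is shifted along it. The function names (`edit_direction`, `apply_edit`), the `encode_fn` interface, and the scaling factor `alpha` are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the MusicMagus code): estimate a semantic edit
# direction in the text-embedding space and apply it to the prompt embedding.
import torch

def edit_direction(encode_fn, source_prompts, target_prompts):
    """Mean difference between target and source prompt embeddings.

    encode_fn is assumed to map a prompt string to a (num_tokens, dim) tensor,
    e.g. the text encoder that feeds the diffusion model's cross-attention.
    """
    src = torch.stack([encode_fn(p).mean(dim=0) for p in source_prompts])
    tgt = torch.stack([encode_fn(p).mean(dim=0) for p in target_prompts])
    return (tgt - src).mean(dim=0)

def apply_edit(text_emb, direction, alpha=1.0):
    """Shift every token embedding of the source prompt along the direction."""
    return text_emb + alpha * direction

# Hypothetical usage: swap "piano" for "violin" while keeping everything else.
# direction = edit_direction(encode_fn,
#                            ["piano solo, relaxing", "a calm piano piece"],
#                            ["violin solo, relaxing", "a calm violin piece"])
# edited_emb = apply_edit(encode_fn("piano solo, relaxing"), direction)
```

Averaging over several paraphrased prompt pairs, as sketched here, is one way to isolate the attribute being changed from sentence-specific variation; the strength of the edit can then be tuned through `alpha`.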
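The consistency constraint over the cross-attention map can be pictured with the following sketch, which reuses (injects) attention maps recorded from the source-prompt denoising pass when denoising under the edited prompt. This is a Prompt-to-Prompt-style mechanism written as a minimal standalone module; the class, its interface, and the injection policy are hypothetical and not taken from the AudioLDM 2 or MusicMagus code.

```python
# Minimal sketch of a cross-attention layer whose attention map can be
# overridden by the map recorded from the source-prompt pass, so that the
# time-frequency layout of the original music is preserved during editing.
import torch

class ConstrainedCrossAttention(torch.nn.Module):
    def __init__(self, dim, text_dim, heads=8):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(text_dim, dim, bias=False)
        self.to_v = torch.nn.Linear(text_dim, dim, bias=False)

    def forward(self, x, text_emb, injected_attn=None):
        b, n, d = x.shape
        h = self.heads
        q = self.to_q(x).view(b, n, h, d // h).transpose(1, 2)
        k = self.to_k(text_emb).view(b, -1, h, d // h).transpose(1, 2)
        v = self.to_v(text_emb).view(b, -1, h, d // h).transpose(1, 2)
        attn = ((q @ k.transpose(-2, -1)) / (d // h) ** 0.5).softmax(dim=-1)
        if injected_attn is not None:
            # Constraint: replace the edited pass's attention with the map
            # recorded from the source-prompt pass.
            attn = injected_attn
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return out, attn
```

In this kind of scheme, the maps returned during the source pass are stored per layer and timestep, then passed back as `injected_attn` during the edited pass, often only for the earlier (more structure-defining) timesteps.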
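Finally, editing real-world recordings relies on DDIM inversion, which runs the deterministic DDIM update in reverse to recover a noise latent from the encoded audio; that latent can then be re-denoised under the edited prompt. The sketch below shows the standard inversion loop; `eps_model`, the `alphas_cumprod` schedule, and `num_steps` are assumed to come from the pretrained diffusion model and are illustrative names, not the paper's code.

```python
# Illustrative DDIM inversion: step the clean latent z0 forward in noise level
# using the deterministic DDIM update, yielding an approximate z_T to edit from.
import torch

@torch.no_grad()
def ddim_invert(eps_model, z0, text_emb, alphas_cumprod, num_steps=50):
    # alphas_cumprod: 1-D tensor of cumulative alphas from the noise schedule.
    timesteps = torch.linspace(0, len(alphas_cumprod) - 1, num_steps).long()
    z = z0
    for i in range(num_steps - 1):
        t, t_next = timesteps[i], timesteps[i + 1]
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        eps = eps_model(z, t, text_emb)          # predicted noise at step t
        x0 = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean latent
        z = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps  # move toward noise
    return z  # approximate z_T for re-generation under the edited prompt
```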