AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling

7 Mar 2024 | Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang, Zhigeng Liu, Xin Zhang, Ruibin Yuan, Ge Zhang, Linyang Li, Hang Yan, Jie Fu, Tao Gui, Tianxiang Sun, Yugang Jiang, Xipeng Qiu
**Institution:** Fudan University, Multimodal Art Projection Research Community, Shanghai AI Laboratory

**Abstract:** AnyGPT is an any-to-any multimodal language model that processes various modalities (speech, text, images, and music) using discrete representations. It can be trained without altering the existing large language model (LLM) architecture or training paradigms, relying solely on data-level preprocessing. The model uses multimodal tokenizers to compress raw data into discrete semantic tokens, enabling unified processing at the semantic level. A text-centric multimodal alignment dataset is built for pre-training, and a large-scale multimodal instruction dataset, AnyInstruct-108k, is synthesized to handle arbitrary combinations of multimodal inputs and outputs. Experimental results show that AnyGPT achieves zero-shot performance comparable to specialized models across all modalities, demonstrating the effectiveness of discrete representations in unifying multiple modalities within a language model.

**Contributions:**
- Proposes AnyGPT, a token-based any-to-any multimodal language model.
- Develops a pipeline to build AnyInstruct-108k, a large-scale multimodal instruction dataset.
- Demonstrates that discrete representations can effectively unify multiple modalities within a language model.

**Related Work:**
- Reviews existing multimodal LLMs and multimodal discretization techniques.
- Discusses challenges and limitations in multimodal data collection and processing.

**Methodology:**
- **Tokenization:** Uses image, speech, and music tokenizers to convert raw data into discrete tokens.
- **Language Model Backbone:** Expands the vocabulary with modality-specific tokens and trains the core LLM with a next-token prediction loss (see the training sketch after this summary).
- **Multimodal Generation:** Uses a two-stage framework for high-fidelity generation, combining semantic and perceptual information modeling (see the generation sketch after this summary).

**Data:**
- **Pre-training Data:** Collects multimodal data from various sources, including image-text, speech-text, and music-text pairs.
- **Multimodal Interleaved Instruction Data:** Synthesizes 108k multi-turn conversations with interleaved multimodal elements using GPT-4 and advanced generative models.

**Evaluation:**
- Conducts zero-shot evaluations on image captioning, speech recognition, text-to-speech, and music understanding and generation.
- Provides conversation examples showcasing AnyGPT's ability to handle multimodal inputs and outputs.

**Conclusion:** AnyGPT is a unified multimodal LLM that leverages discrete representations to process speech, text, images, and music within a single framework, achieving zero-shot performance comparable to specialized models and demonstrating that discrete representations can effectively unify multiple modalities within a language model.
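To make the backbone training concrete, here is a minimal sketch (not the authors' released code) of the vocabulary expansion and next-token-prediction setup described under **Methodology**. The checkpoint name, codebook sizes, and boundary-tag names are illustrative assumptions; the summary only states that modality-specific tokens are added and the core LLM is trained with a next-token prediction loss.

```python
# Sketch: extend a causal LM's vocabulary with discrete multimodal codes and
# train with ordinary next-token prediction on interleaved sequences.
from transformers import AutoTokenizer, AutoModelForCausalLM

BACKBONE = "gpt2"  # stand-in checkpoint; any causal LM works in this sketch
tokenizer = AutoTokenizer.from_pretrained(BACKBONE)
model = AutoModelForCausalLM.from_pretrained(BACKBONE)

# 1. Add discrete codes produced by the image / speech / music tokenizers,
#    plus boundary tags marking each modality span (sizes and names assumed).
new_tokens = (
    [f"<img_{i}>" for i in range(8192)]   # image codebook
    + [f"<sp_{i}>" for i in range(1024)]  # speech codebook
    + [f"<mu_{i}>" for i in range(4096)]  # music codebook
    + ["<soi>", "<eoi>", "<sos>", "<eos_sp>", "<som>", "<eom>"]
)
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# 2. A multimodal example becomes one flat token sequence.
text = "Describe this image: <soi> <img_17> <img_901> <eoi> A dog on a beach."
batch = tokenizer(text, return_tensors="pt")

# 3. Standard next-token prediction: the model shifts labels internally.
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()
```

Because the objective and architecture are unchanged, only the embedding table and output head grow to cover the new modality tokens.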
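A compact sketch of how the two-stage generation could be composed. The `predict_next` and `decode` interfaces are hypothetical placeholders; the summary only states that semantic-level modeling is followed by a perceptual rendering stage.

```python
# Two-stage generation flow (interfaces are hypothetical placeholders).
# Stage 1: the unified LLM autoregressively emits discrete semantic tokens for the
# target modality, stopping at an end-of-modality tag.
# Stage 2: a modality-specific decoder reconstructs the perceptual signal
# (pixels or waveform) from those semantic tokens.
from typing import List

def generate_semantic_tokens(llm, prompt_ids: List[int], eom_id: int,
                             max_new_tokens: int = 512) -> List[int]:
    """Stage 1: greedy next-token decoding of semantic tokens."""
    ids = list(prompt_ids)
    generated: List[int] = []
    for _ in range(max_new_tokens):
        next_id = llm.predict_next(ids)   # hypothetical one-step decode call
        ids.append(next_id)
        generated.append(next_id)
        if next_id == eom_id:             # stop at the end-of-modality tag
            break
    return generated

def render_output(perceptual_decoder, semantic_tokens: List[int]):
    """Stage 2: map semantic tokens to a high-fidelity output with a decoder
    trained for that modality (hypothetical .decode interface)."""
    return perceptual_decoder.decode(semantic_tokens)

# Composition: any-to-any generation = stage 1 followed by stage 2, e.g.
# image = render_output(image_decoder, generate_semantic_tokens(llm, prompt, EOI_ID))
```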