MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning


2024 | Bingchang Liu, Chaoyu Chen, Zi Gong, Cong Liao, Huan Wang, Zhichao Lei, Ming Liang, Dajun Chen, Min Shen, Hailian Zhou, Wei Jiang, Hang Yu, Jianguo Li
**Abstract:** This paper introduces MFTCoder, a multitask fine-tuning framework designed to enhance the coding capabilities of large language models (LLMs). Traditional fine-tuning approaches that target a single downstream task are resource-intensive and fail to exploit the interconnectedness of different code-related tasks. MFTCoder addresses these limitations by enabling simultaneous, parallel fine-tuning on multiple tasks, incorporating various loss functions to handle data imbalance, varying difficulty levels, and inconsistent convergence speeds (an illustrative loss-balancing sketch follows this summary). Extensive experiments show that MFTCoder outperforms both individual fine-tuning on single tasks and fine-tuning on a mixed ensemble of tasks. It also offers efficient training features, including data tokenization modes and parameter-efficient fine-tuning (PEFT) techniques, that significantly improve training speed (an illustrative PEFT setup also follows the summary). MFTCoder integrates seamlessly with mainstream open-source LLMs such as CodeLlama and Qwen, and the MFTCoder-fine-tuned CodeFuse-DeepSeek-33B model held the top spot on the Big Code Models Leaderboard as of January 30, 2024.

**Keywords:** Large Language Model; Code Generation; Multi-task Learning

**Contributions:**
- Introduces MFTCoder, a multitask fine-tuning strategy that adapts LLMs to diverse coding tasks.
- Validates MFTCoder on a range of baseline pretrained models, demonstrating its compatibility and effectiveness.
- Shows that MFTCoder outperforms traditional single-task and mixed-task fine-tuning in multiple experiments.

**Evaluation:**
- Conducted multiple experiments to validate MFTCoder's effectiveness and its superiority over single-task and mixed-task fine-tuning.
- Addressed research questions on performance, generalization, and resource efficiency.
- Compared MFTCoder-tuned models with other models on a variety of code-related tasks, including code completion, text-to-code generation, code comment generation, code translation, and unit test case generation.
- Found that MFTCoder consistently outperformed the alternatives in accuracy and in generalization to unseen tasks.

**Application:**
- Leveraged MFTCoder to fine-tune existing open-source LLMs, achieving significant improvements on coding tasks.
- Powers CodeFuse, a programming assistant with web and IDE plugins, serving 12,000 weekly active users, with AI generating nearly 80,000 lines of code per week.

**Discussion and Outlook:**
- MFTCoder effectively tackles data imbalance, task disparity, and uneven convergence rates.
- Future research will focus on refining task delineation guidelines and exploring more efficient multi-task optimization strategies.
- The framework's scalability and adaptability to mainstream open-source LLMs underscore its broad applicability.
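The abstract highlights balancing losses across tasks with different data volumes, difficulty levels, and convergence speeds. The sketch below is a minimal illustration of one such balancing idea, not the paper's exact formulation: each task's token-level cross-entropy is normalized by that task's number of valid tokens before averaging over the tasks present in a batch. All names (`task_balanced_loss`, `task_ids`, `num_tasks`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def task_balanced_loss(logits, labels, task_ids, num_tasks, ignore_index=-100):
    """Illustrative task-balanced loss: normalize each task's summed token loss
    by that task's number of valid tokens, then average across the tasks present
    in the batch, so data-rich tasks do not dominate smaller ones.
    Assumes at least one task with valid tokens appears in the batch."""
    # Per-token cross-entropy, reshaped back to (batch, seq_len).
    token_loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).view(labels.shape)

    valid = (labels != ignore_index).float()  # mask out padding/prompt tokens
    per_task_losses = []
    for t in range(num_tasks):
        # task_ids has shape (batch,); broadcast the task mask over the sequence.
        task_mask = (task_ids == t).float().unsqueeze(-1) * valid
        n_tokens = task_mask.sum()
        if n_tokens > 0:
            per_task_losses.append((token_loss * task_mask).sum() / n_tokens)
    # Average over tasks that actually appear in this batch.
    return torch.stack(per_task_losses).mean()
```

In a mixed batch this keeps a task with abundant data from dominating the gradient signal of smaller tasks, which is the intuition behind the multitask loss balancing the summary describes.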
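The abstract also lists parameter-efficient fine-tuning (PEFT) among MFTCoder's efficiency features. As a rough sketch of how a LoRA-style PEFT setup is typically attached to an open-source code LLM with the Hugging Face `peft` library, the rank, target modules, and the `codellama/CodeLlama-13b-hf` base model below are illustrative assumptions, not the settings used in the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Hypothetical base model; MFTCoder targets mainstream open-source code LLMs.
base_model = "codellama/CodeLlama-13b-hf"

model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# A generic LoRA configuration; the exact ranks and target modules may differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

Because only the injected low-rank adapter weights are updated during fine-tuning, this style of setup is substantially cheaper than full-parameter fine-tuning, which is what makes PEFT attractive for multitask fine-tuning at scale.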