Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study

9 Feb 2024 | Shuo Liu†, Jacky Keung†, Zhen Yang‡*, Fang Liu§*, Qilin Zhou†, and Yihan Liao†
This paper explores the effectiveness of Parameter-Efficient Fine-Tuning (PEFT) methods, specifically Adapter Tuning (AT) and Low-Rank Adaptation (LoRA), in code-change-related tasks compared to Full-Model Fine-Tuning (FMFT). The study focuses on two tasks: Just-In-Time Defect Prediction (JIT-DP) and Commit Message Generation (CMG). The results show that AT and LoRA achieve state-of-the-art performance in JIT-DP, with improvements of 8.39% and 9.87% in F1 score, respectively, when incorporating expert features. In CMG, both methods perform similarly to FMFT, but with less training time and memory consumption. The study also examines cross-lingual and low-resource scenarios, where AT and LoRA outperform FMFT, indicating their adaptability to diverse programming languages and limited data. Probing tasks further reveal that AT and LoRA effectively encode both static and dynamic code semantics, contributing to their superior performance in code-change tasks. The findings suggest that PEFT methods can be a practical alternative to FMFT, especially in resource-constrained environments.
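To make the PEFT setup concrete, below is a minimal sketch of applying LoRA to a sequence-to-sequence code model using the Hugging Face peft library. The backbone name, rank, and target modules here are illustrative assumptions, not the paper's exact configuration; the idea is simply that low-rank adapter matrices are trained while the pre-trained weights stay frozen, which is what yields the reduced training time and memory footprint discussed above.

```python
# Minimal LoRA sketch (assumed setup, not the authors' exact configuration).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "Salesforce/codet5-base"  # assumed backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA injects trainable low-rank matrices into selected projection layers
# while the original pre-trained weights remain frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                         # rank of the low-rank update (assumed value)
    lora_alpha=16,               # scaling factor for the update
    lora_dropout=0.1,
    target_modules=["q", "v"],   # query/value projections in T5-style attention
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically reports ~1% trainable parameters
```

The wrapped model can then be fine-tuned on a downstream task such as commit message generation with a standard training loop or Trainer; only the adapter parameters are updated, which is the source of the efficiency gains the paper measures.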