Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study

9 Feb 2024 | Shuo Liu, Jacky Keung, Zhen Yang, Fang Liu, Qilin Zhou, and Yihan Liao
This paper investigates the effectiveness of Parameter-Efficient Fine-Tuning (PEFT) methods, specifically Adapter Tuning (AT) and Low-Rank Adaptation (LoRA), on code-change-related tasks, compared against Full-Model Fine-Tuning (FMFT). The study evaluates these methods on two widely studied tasks: Just-In-Time Defect Prediction (JIT-DP) and Commit Message Generation (CMG). The results show that both AT and LoRA achieve state-of-the-art (SOTA) performance on JIT-DP and comparable results on CMG relative to FMFT and other SOTA approaches, and that they are superior in cross-lingual and low-resource scenarios. The study also conducts three probing tasks to examine the efficacy of PEFT techniques from both static and dynamic perspectives. The findings indicate that PEFT, particularly via AT and LoRA, offers promising advantages on code-change-related tasks, surpassing FMFT in certain respects, and the work contributes to a deeper understanding of how PEFT leverages pre-trained language models (PLMs) for dynamic code changes. Because PEFT methods require less training time and memory than FMFT, they are well suited to practical applications, especially in low-resource and cross-lingual settings. Overall, the results suggest that PEFT techniques can adapt effectively to code-change-related tasks with far fewer trainable parameters, making them a practical alternative to FMFT.
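The paper evaluates LoRA and Adapter Tuning but this summary does not reproduce their implementation; the following is a minimal sketch of the core LoRA idea only: a frozen pre-trained weight matrix is augmented with a trainable low-rank update, so far fewer parameters are tuned than under FMFT. The layer dimensions, rank, and scaling below are illustrative assumptions, not values from the study.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of LoRA: freeze the base weight W and learn a low-rank
    update B @ A, i.e. r * (d_in + d_out) trainable parameters instead
    of d_in * d_out."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # FMFT would update these instead
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero-init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x (B A)^T
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Hypothetical usage: wrap one 768x768 projection of a pre-trained model.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable parameters vs. ~590k for full fine-tuning
```

Adapter Tuning follows the same spirit but inserts small bottleneck modules between frozen Transformer sub-layers rather than reparameterizing existing weights.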