Multitask-based Evaluation of Open-Source LLM on Software Vulnerability


6 Jul 2024 | Xin Yin, Chao Ni*, and Shaohua Wang
This paper proposes a pipeline for quantitatively evaluating interactive Large Language Models (LLMs) on publicly available datasets. We conduct an extensive technical evaluation of LLMs on Big-Vul, covering four common software vulnerability tasks: vulnerability detection, vulnerability assessment, vulnerability location, and vulnerability description. This evaluation assesses the multitask capabilities of LLMs on this dataset. We find that existing state-of-the-art approaches and pre-trained language models (LMs) are generally superior to LLMs in software vulnerability detection. However, in software vulnerability assessment and location, certain LLMs (e.g., CodeLlama and WizardCoder) outperform pre-trained LMs, and providing more contextual information enhances the vulnerability assessment capabilities of LLMs. LLMs also exhibit strong vulnerability description capabilities, but their tendency to produce excessive output significantly weakens their performance relative to pre-trained LMs. Overall, although LLMs perform well in some aspects, they still need to better understand the subtle differences among code vulnerabilities and to describe vulnerabilities more precisely to fully realize their potential. Our evaluation pipeline provides valuable insights into the capabilities of LLMs in handling software vulnerabilities.

In summary, pre-trained LMs generally outperform LLMs in vulnerability detection, while certain LLMs (e.g., CodeLlama and WizardCoder) perform better in vulnerability assessment and location. Fine-tuning LLMs improves their performance compared to the few-shot setting, and CodeLlama performs best among the evaluated LLMs in vulnerability detection, assessment, and location. The results indicate that LLMs can detect and assess vulnerabilities, but their performance is limited by the quality of their pre-training data and by model design, and providing more context improves their vulnerability assessment. The paper also highlights the importance of fine-tuning LLMs for specific tasks and the need for further research to improve their ability to understand and describe code vulnerabilities.
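To make the evaluation setup concrete, below is a minimal sketch of a few-shot vulnerability-detection probe of the kind the paper's pipeline performs. It assumes Big-Vul functions are available locally as a CSV with "func" (source code) and "target" (0/1) columns and loads CodeLlama through Hugging Face transformers; the checkpoint name, prompt wording, sample file, and decoding settings are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of a few-shot vulnerability-detection probe (illustrative, not
# the paper's exact pipeline). Assumes a local slice of Big-Vul as a CSV with
# "func" (source code) and "target" (0/1) columns, and CodeLlama loaded via
# Hugging Face transformers.
import pandas as pd
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# One in-context example; the paper's prompts and shot count may differ.
FEW_SHOT = (
    "Decide whether the C/C++ function is vulnerable. Answer YES or NO.\n\n"
    "Function:\nint f(char *s){char b[8];strcpy(b,s);return 0;}\nAnswer: YES\n\n"
)

def detect(func_src: str) -> str:
    """Ask the model for a YES/NO verdict on a single function."""
    prompt = FEW_SHOT + f"Function:\n{func_src}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=2048).to(model.device)
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    # Decode only the newly generated tokens after the prompt.
    answer = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
    return "YES" if "YES" in answer.upper() else "NO"

df = pd.read_csv("big_vul_sample.csv")  # hypothetical local Big-Vul slice
preds = [detect(src) for src in df["func"].head(20)]
labels = ["YES" if t == 1 else "NO" for t in df["target"].head(20)]
accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
print(f"Few-shot detection accuracy on sample: {accuracy:.2f}")
```

The same loop can be adapted to the other three tasks by changing the prompt (e.g., asking for a CVSS severity level for assessment or for vulnerable line numbers for location); the fine-tuned setting reported in the paper would instead train the model on Big-Vul before running such inference.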