On the Challenges of Fuzzing Techniques via Large Language Models

18 May 2025 | Linghan Huang*, Peizhou Zhao*, Lei Ma†, Huaming Chen*
This paper presents a comprehensive review of the integration of large language models (LLMs) into fuzzing test techniques. It discusses the challenges and opportunities of using LLMs to enhance traditional fuzzing methods, focusing on two main approaches: LLM-based fuzzers and fine-tuned fuzzers. The paper analyzes the performance of these methods in terms of code coverage, computational efficiency, and the ability to detect complex errors, and highlights the advantages of LLM-based fuzzers over traditional ones, including higher API and code coverage, improved efficiency, and the capability to detect more complex errors. It also addresses challenges such as hallucinations in LLM outputs, the need for better benchmarking, and the computational costs associated with LLMs. The study concludes that while LLM-based fuzzing shows great potential, further research is needed to address these challenges and improve the effectiveness and efficiency of fuzzing tests. Future directions include the potential for full automation and hardware testing applications of LLM-based fuzzing. Overall, the paper provides a detailed overview of the current state of LLM-based fuzzing and outlines key research areas for future development.
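To make the generate-execute-collect workflow of an LLM-based fuzzer concrete, the following is a minimal sketch, not the paper's method. The functions `llm_propose_inputs`, `target`, and `fuzz` are all hypothetical names introduced here for illustration, and the LLM call is replaced by a random-mutation stub; a real LLM-based fuzzer would query a model API to generate candidate inputs instead.

```python
import random

# Shared RNG so successive stub calls produce different mutations.
_rng = random.Random(0)

def llm_propose_inputs(seed: str, n: int = 5) -> list[str]:
    """Stub standing in for an LLM call: mutate the seed into n candidates.

    A real LLM-based fuzzer would prompt a model with the seed (and possibly
    API documentation or code context) and parse its generated inputs.
    """
    candidates = []
    for _ in range(n):
        pos = _rng.randrange(len(seed))
        mutated = seed[:pos] + chr(_rng.randrange(32, 127)) + seed[pos + 1:]
        candidates.append(mutated)
    return candidates

def target(data: str) -> None:
    """Toy program under test: 'crashes' on any input containing '!'."""
    if "!" in data:
        raise ValueError("crash: unexpected '!'")

def fuzz(seed: str, rounds: int = 20) -> list[str]:
    """Run the generate-execute-collect loop and return crashing inputs."""
    crashes = []
    queue = [seed]
    for _ in range(rounds):
        current = queue.pop(0)
        for candidate in llm_propose_inputs(current):
            try:
                target(candidate)
                queue.append(candidate)  # keep surviving inputs as new seeds
            except ValueError:
                crashes.append(candidate)  # record the crash-triggering input
        if not queue:
            queue = [seed]  # restart from the original seed if queue drains
    return crashes

crashes = fuzz("hello world")
print(f"found {len(crashes)} crashing inputs")
```

The loop mirrors the structure the paper surveys: a generator (here the stub, in practice an LLM) proposes inputs, the target executes them, and failures are harvested while surviving inputs are recycled as seeds.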