AutoBench: Automatic Testbench Generation and Evaluation Using LLMs for HDL Design

September 9–11, 2024 | Ruidi Qiu, Grace Li Zhang, Rolf Drechsler, Ulf Schlichtmann, Bing Li
AutoBench is an LLM-based testbench generation framework for digital circuit design that automatically generates comprehensive testbenches from the description of the design under test (DUT). It introduces a hybrid testbench structure and an LLM-based self-checking system, and includes an automated evaluation framework, AutoEval, to assess the quality of the generated testbenches from multiple perspectives.
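The pass@1 ratio mentioned below can be computed with the standard unbiased pass@k estimator widely used for generation benchmarks; whether AutoBench uses exactly this estimator is an assumption here, and the sketch is illustrative only.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k samples, drawn without replacement from n generations of
    which c pass, is correct."""
    if n - c < k:
        return 1.0  # fewer failures than samples: a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the plain pass fraction c / n:
print(pass_at_k(10, 3, 1))  # 0.3
```

A 57% improvement in pass@1 thus means the fraction of first-attempt generations that produce a working testbench is 1.57 times that of the baseline.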
Experimental results show that AutoBench achieves a 57% improvement in the testbench pass@1 ratio over a baseline that generates testbenches directly with LLMs; on 75 sequential circuits, its pass@1 ratio is 3.36 times that of the baseline. The source code and experimental results are open-sourced at https://github.com/AutoBench/AutoBench.

AutoBench's workflow consists of a forward generation stage and a self-enhancement stage. Forward generation produces the driver and checker components of the testbench, while self-enhancement performs code completion, scenario checking, and auto-debugging. AutoEval evaluates the generated testbenches against multiple criteria, including syntactic correctness, preliminary correctness, and coverage-based metrics. The framework is validated on a dataset of RTL circuits, and the results demonstrate its effectiveness in generating high-quality testbenches for hardware verification.
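The two-stage workflow can be sketched as a simple pipeline. This is a minimal illustration of the stage ordering described above, not AutoBench's implementation: the `Testbench` fields, function names, and placeholder strings are all hypothetical, with the LLM calls stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class Testbench:
    """Illustrative container for the pieces the text mentions
    (field names are assumptions, not from the paper)."""
    driver: str = ""
    checker: str = ""
    log: list = field(default_factory=list)

def forward_generation(tb: Testbench, dut_desc: str) -> Testbench:
    # Stage 1: generate the stimulus driver and the checker from the
    # DUT description (each a placeholder for an LLM call here).
    tb.driver = f"drive({dut_desc})"
    tb.checker = f"check({dut_desc})"
    tb.log.append("forward_generation")
    return tb

def self_enhancement(tb: Testbench) -> Testbench:
    # Stage 2: the three enhancement steps named in the text,
    # each modeled as a logged no-op in this sketch.
    for step in ("code_completion", "scenario_checking", "auto_debugging"):
        tb.log.append(step)
    return tb

tb = self_enhancement(forward_generation(Testbench(), "4-bit counter"))
print(tb.log)
# ['forward_generation', 'code_completion', 'scenario_checking', 'auto_debugging']
```

The point of the sketch is only the ordering: the driver and checker must exist before the enhancement steps can complete, check, and debug them.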