September 9–11, 2024 | Ruidi Qiu, Grace Li Zhang, Rolf Drechsler, Ulf Schlichtmann, Bing Li
AutoBench is a novel framework that automates the generation and evaluation of testbenches for hardware design using Large Language Models (LLMs). Traditional testbench generation methods are often manual and inefficient, leading to time-consuming and costly verification processes. AutoBench addresses these challenges by leveraging LLMs to automatically generate comprehensive testbenches based solely on the description of the design under test (DUT). The framework includes a hybrid testbench structure and a self-checking system, which are realized using LLMs. To evaluate the quality of the generated testbenches, AutoBench also introduces an automated evaluation framework called AutoEval, which assesses the testbenches from multiple perspectives. Experimental results demonstrate that AutoBench achieves a 57% improvement in the testbench pass@1 ratio compared to a baseline that directly generates testbenches using LLMs. For 75 sequential circuits, AutoBench achieves a 3.36 times improvement in the testbench pass@1 ratio compared to the baseline. The source code and experimental results are open-sourced at https://github.com/AutoBench/AutoBench.
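The pass@1 ratio reported above is the fraction of tasks for which a single generated testbench is functionally correct. It is commonly estimated with the unbiased pass@k formula of Chen et al. (2021), evaluated at k = 1. The sketch below is an illustrative implementation of that standard estimator, not AutoBench's own evaluation code; the sample counts in the example are hypothetical.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: number of samples generated per task
    c: number of those samples that pass verification
    k: returns the estimated probability that at least one
       of k randomly drawn samples passes.
    """
    if n - c < k:
        # Every size-k draw must contain at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 10 testbenches sampled for one DUT, 3 correct.
# pass@1 is then simply the passing fraction, 3/10.
print(pass_at_k(10, 3, 1))  # → 0.3
```

At k = 1 the estimator reduces to c/n, the plain success rate; the combinatorial form matters only when reporting pass@k for k > 1 from n > k samples.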