| Edmund Clarke, Orna Grumberg, Somesh Jha, Yuan Lu, and Helmut Veith
This paper presents a counterexample-guided abstraction refinement (CEGAR) methodology for symbolic model checking. The approach automatically generates an initial abstract model by analyzing the control structures of the program to be verified. Abstract models may produce erroneous (spurious) counterexamples, which are then used to refine the abstract model. The refinement process keeps the abstract state space small by using abstraction functions that distinguish many degrees of abstraction for each program variable. The methodology is implemented in NuSMV and has been tested on a large Fujitsu IP core design with about 500 latches and 10,000 lines of SMV code, confirming its effectiveness.
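The idea of abstraction functions with several degrees of precision per variable can be illustrated with a small sketch. This is not the paper's implementation (which works symbolically over SMV programs); the helper `make_abstraction` and the particular partitions are hypothetical, chosen only to show how a variable's domain can be collapsed more or less coarsely:

```python
# Hedged sketch: a "degree of abstraction" for one variable is a partition
# of its value domain; the abstraction function maps each value to the
# index of its block. Finer partitions give more precise abstract models.
# All names here are illustrative, not from the paper.

def make_abstraction(partition):
    """partition: list of disjoint value sets covering the domain.
    Returns a function mapping a concrete value to its block index."""
    block_of = {v: i for i, block in enumerate(partition) for v in block}
    return lambda value: block_of[value]

domain = range(4)
coarse = make_abstraction([set(domain)])           # degree 0: collapse everything
medium = make_abstraction([{0}, {1, 2, 3}])        # degree 1: zero vs. nonzero
fine   = make_abstraction([{v} for v in domain])   # fully concrete

print(coarse(3), medium(3), fine(3))  # 0 1 3
```

Refining a variable then means replacing its partition with a finer one, rather than jumping straight to the fully concrete domain, which is what keeps the abstract state space small.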
The CEGAR approach involves generating an initial abstraction, model-checking the abstract model, and refining the abstraction if a spurious counterexample is found. The refinement process identifies the shortest prefix of the abstract counterexample that does not correspond to an actual trace in the concrete model and splits the abstract state at the end of that prefix to eliminate the spurious counterexample. The refinement algorithm is guaranteed to eliminate each spurious counterexample while keeping the abstract state space small. The methodology is complete for the fragment of ACTL* whose counterexamples are either finite paths or loops.
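The loop described above can be sketched as a small explicit-state program. This is a hedged illustration for safety properties (AG not-bad) over finite models given as sets of states and transition pairs; the paper's actual implementation is symbolic (BDD-based) inside NuSMV, and all names here (`cegar_safety`, `bfs`, the class-id encoding of the abstraction function `h`) are assumptions for the sketch:

```python
# Minimal explicit-state CEGAR loop for AG(not bad), as a sketch.
# h maps each concrete state to an abstract class id (the abstraction
# function); refinement splits the class where concretization fails.
from collections import deque

def bfs(init, edges, targets):
    """Shortest path (list of nodes) from any node in init to targets, or None."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    queue = deque((s, [s]) for s in init)
    seen = set(init)
    while queue:
        u, path = queue.popleft()
        if u in targets:
            return path
        for v in succ.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append((v, path + [v]))
    return None

def cegar_safety(states, init, trans, bad, h):
    """Check AG(not bad) by abstraction refinement. Returns (True, None)
    if the property holds, else (False, a concrete counterexample path)."""
    h = dict(h)
    while True:
        # Existential abstraction of initial states, transitions, bad states.
        a_init = {h[s] for s in init}
        a_trans = {(h[s], h[t]) for s, t in trans}
        a_bad = {h[s] for s in bad}
        path = bfs(a_init, a_trans, a_bad)
        if path is None:
            return True, None  # safe in the abstraction, hence safe concretely
        # Try to concretize the abstract counterexample by forward simulation.
        frontier = {s for s in init if h[s] == path[0]}
        layers = [frontier]
        failed_at = None
        for i in range(1, len(path)):
            nxt = {t for s, t in trans if s in frontier and h[t] == path[i]}
            if not nxt:
                failed_at = i - 1  # shortest prefix that cannot be extended
                break
            frontier = nxt
            layers.append(frontier)
        if failed_at is None and frontier & bad:
            # Real counterexample: reconstruct one concrete path backwards.
            cpath = [next(iter(frontier & bad))]
            for layer in reversed(layers[:-1]):
                cpath.append(next(s for s in layer if (s, cpath[-1]) in trans))
            return False, cpath[::-1]
        if failed_at is None:
            failed_at = len(path) - 1  # reached the bad class, but no bad state
        # Refine: split the failing class, separating the reachable dead-end
        # states from the rest so this spurious trace disappears.
        cls, dead_end = path[failed_at], layers[failed_at]
        fresh = max(h.values()) + 1
        for s in states:
            if h[s] == cls and s not in dead_end:
                h[s] = fresh

# Example: bad state 3 is unreachable from 0, but the coarse abstraction
# merging states 1 and 2 yields a spurious trace; one refinement removes it.
print(cegar_safety({0, 1, 2, 3}, {0}, {(0, 1), (1, 0), (2, 3)}, {3},
                   {0: 0, 1: 1, 2: 1, 3: 2}))  # -> (True, None)
```

The split is guaranteed to be strict (the dead-end states never share a class with the states that have a transition into the next class of the spurious trace), so the loop terminates on finite models.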
The paper describes the theoretical foundations of the CEGAR approach, including the use of abstraction functions, the identification of spurious counterexamples, and the refinement of the abstraction function. It also discusses performance improvements, including the use of BDDs for symbolic representation and two-phase refinement algorithms for loop counterexamples. The methodology has been applied to various benchmark designs and an industrial design of a multimedia processor from Fujitsu, demonstrating its effectiveness in verifying large designs. The approach is general and can be adapted for other forms of abstraction. Future work includes extending the refinement algorithm to handle more complex counterexamples and exploring the application of symbolic model checking without BDDs.
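The treatment of loop counterexamples can also be sketched. An abstract counterexample of the form ⟨prefix⟩⟨loop⟩^ω is checked by unwinding the loop and simulating it on the concrete model: if the (exact) forward frontier survives more unwindings than there are concrete states, some concrete state must repeat by pigeonhole, so a real lasso exists. This is a hedged explicit-state illustration; the paper performs the corresponding computation symbolically with BDDs, and `loop_is_real` and its bound are assumptions of the sketch:

```python
# Hedged sketch: deciding whether an abstract loop counterexample
# <prefix><loop>^omega is real, by unwinding the loop on the concrete model.
# prefix and loop are lists of abstract class ids; h maps concrete -> class.

def loop_is_real(init, trans, h, prefix, loop, n_states):
    """Returns True iff some concrete lasso realizes the abstract loop
    counterexample. n_states bounds the number of concrete states."""
    def step(frontier, cls):
        # Exact forward image restricted to the next abstract class.
        return {t for s, t in trans if s in frontier and h[t] == cls}

    frontier = {s for s in init if h[s] == prefix[0]}
    for cls in prefix[1:]:
        frontier = step(frontier, cls)
    # Unwind the loop n_states + 1 times. If the frontier survives every
    # unwinding, a concrete path visits the loop head more times than there
    # are states, so it revisits one: a real lasso exists.
    for _ in range(n_states + 1):
        for cls in loop:
            frontier = step(frontier, cls)
        if not frontier:
            return False  # spurious: the loop cannot be traversed forever
    return True

# A real lasso: 0 -> 1 -> 0 -> 1 -> ...
print(loop_is_real({0}, {(0, 1), (1, 0)}, {0: 0, 1: 1}, [0], [1, 0], 2))
# -> True
```

The paper's two-phase refinement exploits the same structure: the loop part is checked separately from the prefix, so a spurious loop can be refined without re-examining the whole trace.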