Counterexample-Guided Abstraction Refinement⋆

Edmund Clarke¹, Orna Grumberg², Somesh Jha¹, Yuan Lu¹, and Helmut Veith¹,³

¹ Carnegie Mellon University, Pittsburgh, USA
² Technion, Haifa, Israel
³ Vienna University of Technology, Austria

Abstract. We present an automatic iterative abstraction-refinement methodology in which the initial abstract model is generated by an automatic analysis of the control structures in the program to be verified. Abstract models may admit erroneous (or "spurious") counterexamples. We devise new symbolic techniques which analyze such counterexamples and refine the abstract model correspondingly. The refinement algorithm keeps the size of the abstract state space small due to the use of abstraction functions which distinguish many degrees of abstraction for each program variable. We describe an implementation of our methodology in NuSMV. Practical experiments, including a large Fujitsu IP core design with about 500 latches and 10,000 lines of SMV code, confirm the effectiveness of our approach.

1 Introduction

The state explosion problem remains a major hurdle in applying model checking to large industrial designs. Abstraction is certainly the most important technique for handling this problem; in fact, it is essential for verifying designs of industrial complexity. Currently, abstraction is typically a manual process, often requiring considerable creativity. In order for model checking to be used more widely in industry, automatic techniques are needed for generating abstractions.

In this paper, we describe an automatic abstraction technique for ACTL* specifications which is based on an analysis of the structure of formulas appearing in the program (ACTL* is the fragment of CTL* which only allows universal quantification over paths). In general, our technique computes an upper approximation of the original program. Thus, when a specification is true in the abstract model, it will also be true in the concrete design.
However, if the specification is false in the abstract model, the counterexample may be the result of some behavior in the approximation which is not present in the original model. When this happens, it is necessary to refine the abstraction so that the behavior which caused the erroneous counterexample is eliminated. The main contribution of this paper is an efficient automatic refinement technique which uses information obtained from erroneous counterexamples. The refinement algorithm

⋆ This research is sponsored by the Semiconductor Research Corporation (SRC) under Contract No. 97-DJ-294, the National Science Foundation (NSF) under Grant No. CCR-9505472, and the Max Kade Foundation. One of the authors is also supported by the Austrian Science Fund Project N Z29-INF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of SRC, NSF, or the United States Government.
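The abstract–check–concretize–refine loop described above can be illustrated with a toy, self-contained sketch for a reachability (safety) property on a small explicit-state transition system. Everything here is an illustrative invention for exposition: the example system, the initial partition, and the block-splitting rule are not the paper's algorithms, which operate symbolically over abstraction functions on program variables.

```python
def find_abstract_error_path(partition, init, bad, trans):
    """BFS over the abstract (quotient) system; return an abstract error
    path as a list of block indices, or None if bad is unreachable."""
    block = {s: i for i, b in enumerate(partition) for s in b}
    a_init = {block[s] for s in init}
    a_bad = {block[s] for s in bad}
    a_trans = {(block[s], block[t]) for (s, t) in trans}
    parent = {a: None for a in a_init}
    frontier = list(a_init)
    while frontier:
        a = frontier.pop(0)
        if a in a_bad:                      # reconstruct the path backwards
            path = [a]
            while parent[path[0]] is not None:
                path.insert(0, parent[path[0]])
            return path
        for (x, y) in a_trans:
            if x == a and y not in parent:
                parent[y] = a
                frontier.append(y)
    return None

def concretize(path, partition, init, trans):
    """Replay an abstract path on concrete states. Return (None, _) if it
    is a genuine counterexample; otherwise (i, reached), the step where it
    breaks and the concrete states reached inside block path[i-1]."""
    blocks = [partition[a] for a in path]
    reached = blocks[0] & init
    for i in range(1, len(path)):
        nxt = {t for (s, t) in trans if s in reached and t in blocks[i]}
        if not nxt:
            return i, reached               # spurious: no concrete successor
        reached = nxt
    return None, reached

def cegar(init, bad, trans, states):
    """Return True iff no bad state is concretely reachable from init."""
    # Coarse initial abstraction: one block for bad states, one for the rest.
    partition = [frozenset(states - bad), frozenset(bad)]
    while True:
        path = find_abstract_error_path(partition, init, bad, trans)
        if path is None:
            return True                     # holds abstractly, hence concretely
        i, reached = concretize(path, partition, init, trans)
        if i is None:
            return False                    # genuine counterexample
        # Refine: split the block where the path breaks into its concretely
        # reached (dead-end) states and the rest, eliminating the spurious path.
        a = path[i - 1]
        rest = partition[a] - reached
        partition = [b for j, b in enumerate(partition) if j != a]
        partition += [frozenset(reached)] + ([frozenset(rest)] if rest else [])

# Tiny example: bad state 4 is only reachable via 3, and 3 is unreachable.
STATES = {0, 1, 2, 3, 4}
INIT, BAD = {0}, {4}
TRANS = {(0, 1), (1, 2), (2, 0), (3, 4)}
```

Here `cegar(INIT, BAD, TRANS, STATES)` returns True after three refinement steps, while adding the edge `(2, 3)` makes state 4 genuinely reachable, so the loop eventually concretizes an abstract error path and returns False. Because each refinement strictly splits a block, the loop terminates on any finite system.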