What Went Wrong: Explaining Counterexamples

Alex Groce (1) and Willem Visser (2)

1 Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
2 RIACS/NASA Ames Research Center, Moffett Field, CA 94035-1000

Abstract. One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and non-failing runs, and information on the changes in scheduling and environmental actions needed to cause successful runs to fail.

1 Introduction

In model checking, algorithms are used to systematically determine whether a system satisfies a specification. One of the major advantages of model checking in comparison to such methods as theorem proving is the production of a counterexample that provides a detailed example of how the system violates the specification when verification fails. However, even a detailed trace of how a system violates a specification may not provide enough information to easily understand (much less remedy) the problem with the system. Model checking is particularly effective at finding subtle errors that elude traditional testing, but consequently those errors are also difficult to understand, especially from a single error trace.
Furthermore, when the model of the system is in any sense abstracted from the real implementation, simply determining whether an error is indeed a fault in the system, or merely a consequence of modeling assumptions or an incorrect specification, can be quite difficult.

We attempt to extract more information from a single counterexample produced by model checking in order to facilitate understanding of errors in a system (or problems with the specification of a system). In this work we focus on finite executions demonstrating violations of safety properties (e.g. assertion violations, uncaught exceptions, and deadlocks) in Java programs. The key to this approach is to first define (and then find) multiple variations on a single counterexample: other versions of the same error. This definition naturally gives rise to a second set of executions: variations in which the error does not occur. We call the first set of executions negatives and the second set positives. Analyzing the common features of each set, and the differences between the sets, may yield more useful feedback than reading (only) the original counterexample.
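The core comparison between negatives and positives can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal illustration, assuming each execution is reduced to the set of source locations it covers, of how one might compute locations that distinguish every failing run from every succeeding run (the function and data names are hypothetical):

```python
def distinguishing_locations(negatives, positives):
    """Given coverage sets for failing (negative) and succeeding (positive)
    runs, return the locations that separate the two sets of executions."""
    # Locations executed in every failing run but in no succeeding run.
    only_in_failures = set.intersection(*negatives) - set.union(*positives)
    # Locations executed in every succeeding run but in no failing run.
    only_in_successes = set.intersection(*positives) - set.union(*negatives)
    return only_in_failures, only_in_successes

# Illustrative coverage data for two failing and two succeeding runs.
negatives = [{"Lock.java:10", "Main.java:42"}, {"Lock.java:10", "Main.java:7"}]
positives = [{"Main.java:42", "Main.java:99"}, {"Main.java:7", "Main.java:99"}]
fail_only, pass_only = distinguishing_locations(negatives, positives)
# fail_only -> {"Lock.java:10"}; pass_only -> {"Main.java:99"}
```

In this toy example, Lock.java:10 is flagged as code crucial to the failure, since it appears in all negatives and no positives; the actual analysis in the paper works over richer execution information than plain coverage sets.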
This note was uploaded on 02/24/2012 for the course CSE 503, taught by Professor David Notkin during the Spring '11 term at the University of Washington.