testgen-ase2006 - An Empirical Comparison of Automated Generation and Classification Techniques for Object-Oriented Unit Testing

An Empirical Comparison of Automated Generation and Classification Techniques for Object-Oriented Unit Testing

Marcelo d'Amorim (1), Carlos Pacheco (2), Tao Xie (3), Darko Marinov (1), Michael D. Ernst (2)
(1) Department of Computer Science, University of Illinois, Urbana-Champaign, IL, USA
(2) Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA, USA
(3) Computer Science Department, North Carolina State University, Raleigh, NC, USA
{damorim, marinov}@cs.uiuc.edu, {cpacheco, mernst}@csail.mit.edu, xie@csc.ncsu.edu

Abstract

Testing involves two major activities: generating test inputs and determining whether they reveal faults. Automated test generation techniques include random generation and symbolic execution. Automated test classification techniques include ones based on uncaught exceptions and violations of operational models inferred from manually provided tests. Previous research on unit testing for object-oriented programs developed three pairs of these techniques: model-based random testing, exception-based random testing, and exception-based symbolic testing. We develop a novel pair, model-based symbolic testing. We also empirically compare all four pairs of these generation and classification techniques. The results show that the pairs are complementary (i.e., reveal faults differently), with their respective strengths and weaknesses.

1. Introduction

Unit testing checks the correctness of program units (components) in isolation. It is an important part of software development; if the units are incorrect, it is hard to build a correct system from them. Unit testing is becoming a common and substantial part of software development practice: at Microsoft, for example, 79% of developers use unit tests [24], and the code for unit tests is often larger than the project code under test [23].

Creation of a test suite requires test input generation, which generates unit test inputs, and test classification, which determines whether a test passed or failed. (This paper often uses the term "test" for "test input", and it uses "classification" for determining the correctness of an execution, which is sometimes called the oracle problem.) Programmers can test manually, using their intuition or experience to make up test inputs and using either informal reasoning or experimentation to determine the proper output for each input. One alternative is to use formal specifications, which can aid both test generation and classification [4]. Such specifications are time-consuming and difficult to produce manually, and they often do not exist in practice. This research focuses on testing techniques that do not require a priori specifications. ...
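As a concrete illustration of the two activities the paper separates, the following minimal Java sketch randomly generates short method sequences and then classifies each run in two ways: by an exception escaping the code under test, and by a violated invariant of the kind an operational-model tool might infer from passing runs. This sketch is not from the paper; the IntStack class, its seeded fault, the size >= 0 "model", and all constants (seed, 100 tests, 5 operations) are hypothetical placeholders chosen only to make the distinction visible.

    import java.util.Random;

    /** Toy sketch of random generation plus two classification strategies.
     *  Everything here is illustrative; it is not the paper's tooling. */
    public class RandomTestingSketch {

        /** Class under test with a seeded fault: pop() on an empty stack
         *  silently drives size negative instead of signaling an error. */
        static class IntStack {
            private int[] data = new int[10];
            private int size = 0;

            void push(int x) {
                if (size == data.length) {
                    int[] bigger = new int[data.length * 2];
                    System.arraycopy(data, 0, bigger, 0, data.length);
                    data = bigger;
                }
                data[size++] = x;
            }

            int pop() {
                size--;                          // fault: no empty-stack check
                return size >= 0 ? data[size] : 0;
            }

            int size() { return size; }
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int exceptionFailures = 0;
            int modelFailures = 0;

            // Generation: build short random method sequences against IntStack.
            for (int test = 0; test < 100; test++) {
                IntStack stack = new IntStack();
                try {
                    for (int op = 0; op < 5; op++) {
                        if (rnd.nextBoolean()) {
                            stack.push(rnd.nextInt(100));
                        } else {
                            stack.pop();
                        }
                        // Classification 2: operational-model check. The
                        // "inferred" invariant here is simply size >= 0;
                        // real tools infer such properties from manual tests.
                        if (stack.size() < 0) {
                            modelFailures++;
                            break;
                        }
                    }
                } catch (RuntimeException e) {
                    // Classification 1: exception-based. An exception escaping
                    // the code under test marks the sequence as fault-revealing.
                    exceptionFailures++;
                }
            }

            System.out.println("exception-classified failures: " + exceptionFailures);
            System.out.println("model-classified failures:     " + modelFailures);
        }
    }

In this toy setup the seeded fault never throws, so the exception-based classifier reports nothing while the model-based classifier flags every run whose size goes negative; that asymmetry is a small-scale version of the paper's observation that the classification techniques are complementary and reveal faults differently.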