Profiling Deployed Software: Assessing Strategies and Testing Opportunities

Sebastian Elbaum, Member, IEEE, and Madeline Diep

Abstract — An understanding of how software is employed in the field can yield many opportunities for quality improvements. Profiling released software can provide such an understanding. However, profiling released software is difficult due to the potentially large number of deployed sites that must be profiled, the transparency requirements at a user's site, and the remote data collection and deployment management process. Researchers have recently proposed various approaches to tap into the opportunities offered by profiling deployed systems and to overcome those challenges. Initial studies have illustrated the application of these approaches and have shown their feasibility. Still, the proposed approaches, and the tradeoffs between overhead, accuracy, and potential benefits for the testing activity, have been barely quantified. This paper aims to overcome those limitations. Our analysis of 1,200 user sessions on a 155 KLOC deployed system substantiates the ability of field data to support test suite improvements, assesses the efficiency of profiling techniques for released software, and assesses the effectiveness of testing efforts that leverage profiled field data.

Index Terms — Profiling, instrumentation, software deployment, testing, empirical studies.

1 INTRODUCTION

Software test engineers cannot predict, much less exercise, the overwhelming number of potential scenarios faced by their software. Instead, they allocate their limited resources based on assumptions about how the software will be employed after release. Yet, the lack of connection between in-house activities and how the software is employed in the field can lead to inaccurate assumptions, resulting in decreased software quality and reliability over the system's lifetime.
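One way to connect in-house assumptions to field behavior is lightweight instrumentation that records which parts of the program each user session actually exercises. The sketch below is purely illustrative, not the instrumentation used in the paper's study: a hypothetical `FieldProfiler` class uses Python's standard `sys.setprofile` hook to count function executions during a session, the kind of per-deployment execution profile that field-data-driven testing approaches consume.

```python
import sys
from collections import Counter

class FieldProfiler:
    """Illustrative session profiler: counts Python function calls
    while active, approximating field execution data collection."""

    def __init__(self):
        self.counts = Counter()

    def _hook(self, frame, event, arg):
        # The profile hook fires on every function call; we tally
        # (file, function) pairs to build a coarse execution profile.
        if event == "call":
            code = frame.f_code
            self.counts[(code.co_filename, code.co_name)] += 1

    def __enter__(self):
        sys.setprofile(self._hook)
        return self

    def __exit__(self, *exc):
        sys.setprofile(None)  # stop profiling when the session ends
        return False

def demo_session():
    """Simulated user session exercising one function three times."""
    def helper():
        return 1
    with FieldProfiler() as profiler:
        for _ in range(3):
            helper()
    return profiler.counts

counts = demo_session()
helper_calls = sum(n for (_, name), n in counts.items() if name == "helper")
```

In a real deployment, such counts would be buffered locally and periodically transmitted to the vendor, which is where the overhead, accuracy, and transparency tradeoffs the abstract mentions come into play.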
Even if estimations are initially accurate, isolation from what happens in the field leaves engineers unaware of future shifts in user behavior, or of variations due to new environments, until it is too late. Approaches integrating in-house activities with field data appear capable of overcoming such limitations. These approaches must profile field data to continually assess and adapt quality assurance activities, considering each deployed software instance as a source of information. The increasing pervasiveness and connectivity of software, coupled with a constantly growing pool of users, offers these approaches a unique opportunity to gain a better understanding of the software's potential behavior.

Early commercial efforts have attempted to harness this opportunity by including built-in reporting capabilities in deployed applications that are activated in the presence of certain failures (e.g., Software Quality Agent from Netscape [25], Traceback [17], the Microsoft Windows Error Reporting API). More recent approaches, however, are designed to leverage the deployed software instances throughout their
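The failure-activated reporting the commercial tools above provide can be sketched in a few lines. This is a hedged illustration of the general mechanism, not the actual implementation of any of the cited products: the `report_failure` hook name and the `REPORT_QUEUE` stand-in for a remote transport are both hypothetical.

```python
import json
import sys
import time
import traceback

# Stand-in for a network channel back to the vendor (hypothetical).
REPORT_QUEUE = []

def report_failure(exc_type, exc_value, tb):
    """Failure-activated hook: package a crash report for later
    transmission, then defer to Python's default exception handling."""
    report = {
        "timestamp": time.time(),
        "exception": exc_type.__name__,
        "message": str(exc_value),
        "stack": traceback.format_tb(tb),
    }
    REPORT_QUEUE.append(json.dumps(report))
    sys.__excepthook__(exc_type, exc_value, tb)

# Install the hook so only uncaught failures trigger a report,
# mirroring the "activated in the presence of certain failures" model.
sys.excepthook = report_failure
```

Note the contrast the text draws: such hooks observe the program only at the moment of failure, whereas the more recent approaches discussed in this paper profile deployed instances continuously.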