lect-15-research-economics-governance

5/20/2011

CSE 503: Software Engineering Research Approaches, Economics and Governance
David Notkin, Spring 2011

Evaluation of SE research
What convinces you? Why?

Possible answers include:
- Intuition
- Quantitative assessments
- Qualitative assessments
- Case studies
- Other possible answers?

Brooks on evaluation
- The first user gives you infinite utility – that is, you learn more from the first person who tries an approach than from every person thereafter.
- In HCI, Brooks compared "narrow truths proved convincingly by statistically sound experiments" and "broad 'truths', generally applicable, but supported only by possibly unrepresentative observations."
- Source: Grasping Reality Through Illusion: Interactive Graphics Serving Science. Proc. 1988 ACM SIGCHI.

More on Brooks, by Mary Shaw
"Brooks proposes to relieve the tension through a certainty-shell structure – to recognize three nested classes of results:
- Findings: well-established scientific truths, judged by truthfulness and rigor;
- Observations: reports on actual phenomena, judged by interestingness;
- Rules of thumb: generalizations, signed by their author but perhaps incompletely supported by data, judged by usefulness."
Source: What Makes Good Research in Software Engineering? International Journal of Software Tools for Technology Transfer, 2002.

Shaw: research questions in SE
[slide figure not captured in this preview]

Shaw: types of SE results
[slide figure not captured in this preview]

Shaw: types of validation
[slide figure not captured in this preview]

Tichy et al. on quantitative evaluation
Tichy, Lukowicz, Prechelt & Heinz. Experimental evaluation in computer science: A quantitative study. Journal of Systems and Software, 1995.

Abstract: A survey of 400 recent research articles suggests that computer scientists publish relatively few papers with experimentally validated results. The survey includes complete volumes of several refereed computer science journals, a conference, and 50 titles drawn at random from all articles published by ACM in 1993. The journals Optical Engineering (OE) and Neural Computation (NC) were used for comparison.

Of the papers in the random sample that would require experimental validation, 40% have none at all. In journals related to software engineering, this fraction is 50%. In comparison, the fraction of papers lacking quantitative evaluation in OE and NC is only 15% and 12%, respectively. Conversely, the fraction of papers that devote one fifth or more of their space to experimental validation is almost 70% for OE and NC, while it is a mere 30% for the computer science (CS) random sample and 20% for software engineering. The low ratio of validated results appears to be a serious weakness in computer science research.
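
To pin down what the survey's headline statistic measures, here is a minimal sketch in Python; it is not from the lecture, and the per-paper classifications below are made-up stand-ins, since the paper's raw tallies are not reproduced here. Only the computation pattern (the share of papers requiring validation that provide none) mirrors Tichy et al.

    # Illustrative only: toy classifications standing in for Tichy et al.'s
    # survey data. Each paper is marked by whether its claims would require
    # experimental validation and whether it actually provides any.
    papers = [
        {"requires_validation": True,  "has_validation": True},
        {"requires_validation": True,  "has_validation": False},
        {"requires_validation": True,  "has_validation": False},
        {"requires_validation": True,  "has_validation": True},
        {"requires_validation": True,  "has_validation": True},
        {"requires_validation": False, "has_validation": False},  # e.g., a pure theory paper
    ]

    # Restrict to papers whose claims call for experimental validation, then
    # compute the fraction of those providing none, i.e., the statistic
    # reported as 40% for the random CS sample and 50% for SE journals.
    needing = [p for p in papers if p["requires_validation"]]
    unvalidated = sum(1 for p in needing if not p["has_validation"])

    print(f"{unvalidated / len(needing):.0%} of papers requiring "
          f"validation have none")  # prints 40% for this toy sample

Note that the denominator excludes papers that do not need experimental validation (e.g., pure theory), which is why the reported percentages are not fractions of all 400 surveyed articles.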