cnotes7 - Software Metrics

Software Metrics
1. Lord Kelvin, a physicist
2. George Miller, a psychologist
Software Metrics
- Product vs. process
- Most metrics are indirect: there is no way to measure the property directly, or the final product does not yet exist.
- For prediction, we need a model of the relationship of the predicted variable with other measurable variables (a sketch follows below).
Three assumptions (Kitchenham):
1. We can accurately measure some property of the software or process.
2. A relationship exists between what we can measure and what we want to know.
3. This relationship is understood, has been validated, and can be expressed in terms of a formula or model.
Few metrics have been demonstrated to be predictable or related to product or process attributes.
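As an illustration of such a model, here is a minimal sketch in Python, assuming (purely hypothetically) that historical data relating program size in KLOC to defects found is available and that a simple linear relationship is adequate; the data points and the 60 KLOC example are invented for illustration.

# Minimal sketch: fit a linear model relating a measurable variable (size in
# KLOC) to the variable we want to predict (defects found in test).
# The data points below are hypothetical.
history = [(12, 30), (25, 61), (40, 98), (55, 140), (70, 171)]  # (KLOC, defects)

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Ordinary least-squares fit: defects ~ a + b * KLOC
b = (sum((x - mean_x) * (y - mean_y) for x, y in history)
     / sum((x - mean_x) ** 2 for x, _ in history))
a = mean_y - b * mean_x

# A validated relationship (assumption 3) needs more than a good fit on past
# data, but R^2 is at least a first sanity check.
ss_res = sum((y - (a + b * x)) ** 2 for x, y in history)
ss_tot = sum((y - mean_y) ** 2 for _, y in history)
r_squared = 1 - ss_res / ss_tot

print(f"defects = {a:.1f} + {b:.2f} * KLOC  (R^2 = {r_squared:.3f})")
print(f"prediction for a 60 KLOC project: {a + b * 60:.0f} defects")

Note that this only exercises assumptions 1 and 2; assumption 3 (a validated, understood relationship) is exactly what most metrics have not been shown to satisfy.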
Software Metrics (2)
- Code
  - Static
  - Dynamic
- Programmer productivity
- Design
- Testing
- Maintainability
- Management
  - Cost
  - Duration, time
  - Staffing
Code Metrics
- Estimate the number of bugs left in code:
  - from static analysis of the code
  - from dynamic execution
- Estimate future failure times: operational reliability
Static Analysis of Code
Halstead's Software Physics or Software Science
- n1 = number of distinct operators in the program
- n2 = number of distinct operands in the program
- N1 = total number of operator occurrences
- N2 = total number of operand occurrences
Program length: N = N1 + N2
Program volume: V = N log2(n1 + n2)
  (represents the volume of information, in bits, necessary to specify a program)
Specification abstraction level: L = (2 * n2) / (n1 * N2)
Program effort: E = (n1 * N2 * (N1 + N2) * log2(n1 + n2)) / (2 * n2)
  (interpreted as the number of mental discriminations required to implement the program)
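The measures above follow mechanically from the four counts, with effort E = V / L (equal to the expanded formula given above). A minimal sketch in Python, assuming the counts have already been produced by a lexer or parser; the counts in the example call are hypothetical.

import math

def halstead(n1, n2, N1, N2):
    """Halstead's Software Science measures from the four basic counts:
    n1/n2 = distinct operators/operands, N1/N2 = total occurrences."""
    N = N1 + N2                      # program length
    V = N * math.log2(n1 + n2)       # program volume, in bits
    L = (2 * n2) / (n1 * N2)         # specification abstraction level
    E = V / L                        # effort: number of mental discriminations
    return {"length": N, "volume": V, "level": L, "effort": E}

# Hypothetical counts for a small routine:
print(halstead(n1=12, n2=7, N1=27, N2=15))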
McCabe's Cyclomatic Complexity
Hypothesis: the difficulty of understanding a program is largely determined by the complexity of its control flow graph.
The cyclomatic number V of a connected graph G is the number of linearly independent paths in the graph, or the number of regions in a planar graph.
(Figure: a planar control flow graph divided into regions R1-R5.)
Claimed to be a measure of the testing difficulty and reliability of modules. McCabe recommends a maximum V(G) of 10.
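For a single connected control flow graph, the cyclomatic number is commonly computed as V(G) = E - N + 2, where E is the number of edges and N the number of nodes (E - N + 2P for P connected components). A minimal sketch in Python on a hypothetical graph for a routine with one if/else followed by a loop:

def cyclomatic_complexity(nodes, edges, components=1):
    """V(G) = E - N + 2P for a control flow graph with E edges, N nodes,
    and P connected components (P = 1 for a single routine)."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control flow graph: one if/else followed by a loop.
nodes = ["entry", "if", "then", "else", "loop", "exit"]
edges = [("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "loop"),   # back edge of the loop
         ("loop", "exit")]

print("V(G) =", cyclomatic_complexity(nodes, edges))  # 7 - 6 + 2 = 3

By McCabe's guideline this routine is well under the recommended maximum of 10; in practice the graph would be extracted by a tool rather than written out by hand.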
Static Analysis of Code (Problems)
- Doesn't change as the program changes.
- High correlation with program size.
- No real intuitive reason for many of the metrics.
- Ignores many factors: e.g., computing environment, application area, particular algorithms implemented, characteristics of users, ability of programmers.
- Very easy to get around: programmers may introduce more obscure complexity in order to minimize the properties measured by a particular complexity metric.
Static Analysis of Code (Problems, cont'd)
- Size is the best predictor of inherent faults remaining at the start of program test.
- One study has shown that, besides size, there are three significant additional factors:
  1. Specification change activity, measured in pages of specification changes per k lines of code.
  2.