Understanding the Value of Systems Engineering

Eric C. Honour
Honourcode, Inc.
3008 Ashbury Lane
Pensacola, FL 32533

Abstract. The practices of systems engineering are believed to have high value in the development of complex systems. Heuristic wisdom is that an increase in the quantity and quality of systems engineering (SE) can reduce project schedule while increasing product quality. This paper explores recent theoretical and statistical information concerning this heuristic value of SE. It explores the underlying theoretical relationships among project cost and schedule, technical value, technical size, technical complexity, and technical quality, summarizing prior work by the author. It then identifies and summarizes six prior statistical studies with conclusions that relate to the value of SE. Finally, the paper provides final results of a statistical study by the INCOSE Systems Engineering Center of Excellence (SECOE) that presents evident correlations supporting the heuristics. The results indicate that optimal SE effort is approximately 15-20% of the total project effort.

Background

The discipline of systems engineering (SE) has been recognized for 50 years as essential to the development of complex systems. Since its recognition in the 1950s [Goode 1957], SE has been applied to products as varied as ships, computers and software, aircraft, environmental control, urban infrastructure, and automobiles [SE Applications TC 2000]. Systems engineers have been the recognized technical leaders [Hall 1993, Frank 2000] of complex project after complex project.

In many ways, however, we understand less about SE than about nearly any other engineering discipline. Systems engineering can rely on systems science and on many domain physics relationships to analyze product system performance. But systems engineers still struggle with the basic mathematical relationships that control the development of systems. SE today guides each system development by the use of heuristics learned by each practitioner during the personal experimentation of a career. The heuristics known by each practitioner differ; one need only view the fractured development of SE "standards" and SE certification to see how much they differ.

As a result of this heuristic understanding of the discipline, it has been nearly impossible to quantify the value of SE to projects [Sheard 2000]. Yet both practitioners and managers intuitively understand that value. They typically incorporate some SE practices in every complex project. The differences in understanding, however, just as typically result in disagreement over the level and formality of the practices to include. Prescriptivists create extensive standards, handbooks, and maturity models that prescribe the practices that "should" be included. Descriptivists document the practices that were "successfully" followed on given projects. In neither case, however, are the practices based on a quantified measurement of the actual value to the project.

[Figure 1. Intuitive Value of SE: traditional design versus "system thinking" design across the system design, detail design, production, integration, and test phases, showing saved time and cost.]
[Figure 2. Risk Reduction by SE: design risk over time for traditional versus "system thinking" design.]

The intuitive understanding of the value of SE is shown in Figure 1. In traditional design, without consideration of SE concepts, the creation of a system product is focused on production, integration, and test.
In a "system thinking" design, greater emphasis on the system design creates easier, more rapid integration and test. The overall result is a savings in both time and cost, with a higher quality system product. The primary impact of the systems engineering concepts is to reduce risk early, as shown in Figure 2. By reducing risk early, the problems of integration and test are prevented from occurring, thereby reducing cost and shortening schedule. The challenge in understanding the value of SE is to quantify these intuitive understandings.

Field of Study

Because there is wide variance in the perceptions of SE, any theoretical effort must start with a definition of terms that bounds the field of study. For this paper, "systems engineering" is taken in a broad sense that includes all efforts that apply science and technology ("engineering") to the development of interacting combinations of elements ("systems"). Such efforts are frequently characterized as having both technical and management portions because of the interdisciplinary nature of system development teams. The breadth of skills necessary for good SE has been studied well by [Frank 2000]. We take "SE management" to be the efforts that guide and control the systems engineering.

There are obvious overlaps with (a) "development engineering," the use of specific engineering disciplines to create the elements of a system, (b) "test engineering," the application of engineering to verify and/or validate the system, and (c) "program management," the overall control of a project. SE distinguishes itself from development engineering by the use of interdiscipline coordination and inter-element technical analyses. Test engineering applied at the system level is taken to be a subset of SE, while test engineering applied to system elements is not. SE management is distinguished from program management by the use of technical analysis and control and by the emphasis on technical quality as opposed to financial and schedule concerns.

Underlying Mathematical Theory

Understanding the value of SE requires quantifying that value. Quantification requires understanding the numerical parameters that matter to SE. A usable mathematical theory that underlies SE can contribute to an understanding of the important parameters, particularly those important to SE management.

There are frequently stated arguments against the feasibility of such a mathematical theory [Sheard 2000]. First, each system development is one of a kind. In presenting new challenges, each such development might invalidate the scientific basis of any prior theory. Second, projects vary widely in important parametric measures: size, schedule, and acceptable risk. This variance leads, in the few data collected previously [Mar 2002], to wide variance in the data points. Third, such a theory of necessity includes a statistical representation of the human nature of the developers, a representation that is frequently viewed with skepticism. Fourth, SE applies itself to systems that contain components developed by virtually any other engineering discipline [Honour 1999]. This highly varied application might defeat codification. Yet none of these arguments proves the impossibility of such a mathematical theory.

This section summarizes the author's work [Honour 2002] toward a mathematical theory of SE management and applies that theory to quantifying the value of SE. The intent of this summary is to lead up to a working hypothesis for the value of SE.

Basic SE Values

Observable Values in SE.
The observable values in SE management are widely known, although there is great difficulty in defining some of them. Each system development project can be viewed as a stochastic process. At the beginning of the project, management choices are made that set the parameters for the stochastic process. Such choices include goals, process definitions, tool applications, personnel assignments, and more. During the project, many factors influence the actual outcome. The resulting completed project achieves values in accordance with as-yet-unknown probability distributions. All of the observable values cited in this section may therefore be viewed as sample values from interrelated stochastic processes. Any given project provides a single sample of the values.

Technical "size" (s) is an intuitive but highly elusive quantity that represents the overall size of the development effort. Some proposed measures of technical "size," all inadequate so far, include the number of requirements, the number of function points, the number of new-development items, and even (in a twist of cause and effect) the overall development cost.

Technical complexity (x) represents another intuitive attribute of the system. Size and complexity are independent characteristics. A system of any given "size" can be made more difficult by increasing its complexity, where complexity is usually related to the degree of interaction of the system components. One measure of complexity was explored well by [Thomas & Mog 1997] and subsequently validated on a series of NASA projects [Thomas & Mog 1998].

Technical quality (q) is yet a third intuitive and independent attribute of the system. Quality is measured by comparing the actual resulting product system with the intended objective. Component attributes of quality vary widely and are based on the perceptions of the stakeholders, resulting in what appears to be subjective measurement. One measure of technical quality was proposed by the author [Honour 2001] in the form of value against a pre-agreed Objective Function.

Technical value (v) recognizes the basic trade-off among the three technical attributes above. For a given duration, cost, and risk, the three technical attributes appear to have inverse relationships. Size can be increased only at the expense of complexity and/or quality. Likewise, complexity can be increased only if size and/or quality are reduced, and the same is true of quality against size and complexity. Given appropriate selections of quantification, the inverse trade-off can be represented as:

v = s * x * q    (1)

Project duration (d) is an attribute of the system development that is commonly used for management tracking and control. Duration is well understood, with extensive software tools for planning and scheduling projects. For our purposes, we are concerned with the overall development duration from concept through validation of the first product(s). This duration may include activities such as operational analysis, requirements definition, system design, developmental engineering, prototyping, first article(s) production, verification, and validation.

Project cost (c) is a second attribute of the system development that is also commonly used for management tracking and control. As with duration, project cost is well understood. The scope for project cost, as with duration, is the overall development cost from concept through validation of the first product(s).

Risk (r) is a third attribute of the system development. Risk is defined in the literature in many ways.
In its basic form, risk represents variability in the stochastic processes for value, duration, and cost. Risk can be measured in several ways; we speak of risk applied to technical parameters, to schedule, and to cost. Most current risk definitions focus on cost, with the assumption that technical and schedule risks can be translated to cost [e.g., Langenberg 1999]. As an attribute of the overall project, a single value of project risk was proposed by the author [Honour 2001]. For this paper, we recognize risk to be a second-order measure of the stochastic variation in project cost. Somewhat arbitrarily, we select risk to be the single-ended 90% confidence level of cost overrun:

r = c_r - E(c), such that P(c < c_r) = 0.90    (2)

Systems engineering effort (SEE) is the effort expended during the project to perform effective systems engineering tasks. It is the primary independent variable in our heuristic relationships. In other words, SEE is the primary variable that is selectable and controllable during a system development; other values usually follow from the selection of SEE. SEE must take into account the quality of the work performed, because a group that performs systems engineering tasks poorly provides little benefit to a project. We therefore define SEE as:

SEE = SE Quality * SE Cost / Project Cost    (3)

In this definition, SEE can be expressed as an effective percent of the total project cost. SE Quality (SEQ) is difficult to measure, but may be quantified subjectively by the project participants. It would be desirable to create a more objective measure.

Value of SE Hypothesis

In the prior work [Honour 2002], the author explored the heuristic relationships among the basic SE values by performing two-point end-value analysis of each pairwise relationship. The heuristic relationships can be seen in the prior work. Among them is the primary hypothesis for the value of systems engineering, shown in Figure 3. The thin lines represent the achievable value for different levels of SEQ. The lower thin line is the value obtainable if the SE effort extracted from the project performs no effective SE, i.e., a reduction in effective project budget without any systems engineering worth. The upper thin line is the value obtainable for application of "best" systems engineering. The actual relationship transitions from the lower line to the upper line as SEE is increased, because SE tasks cannot be fully effective until enough budget is allocated to them. The relationship of value to SEE therefore starts at non-zero (a project without SE can still achieve some value), grows to a maximum, then diminishes to zero at SEE = 100% (all project effort is assigned to SE, so no system is produced). The rapid upward trend in the resulting curve for lower values of SEE corresponds to the expectation of many systems engineers that greater application of systems engineering improves the value of a project. Most projects appear to operate somewhere within this region, leading to the widespread occurrence of this common expectation. (See the studies reported below for data that supports this statement.)

[Figure 3. Seeking Optimum Level of SE Effort Within Projects: expected value E(V) versus SE effort as % of total project (0 to 100%), with curves for SE Quality = 0% and SE Quality = 100% and the typical operating region marked.]
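Equations (1) through (3) lend themselves to direct computation. The following is a minimal sketch in Python with entirely hypothetical numbers; the scales chosen for s, x, and q are assumptions (the paper leaves their quantification open), as are the 0-1 scaling of SE Quality and the normal model of project cost used for equation (2), which itself requires only some stochastic model of cost.

```python
# A minimal sketch of equations (1)-(3) with hypothetical numbers. The
# scales for s, x, and q are assumptions, as is the 0-1 scaling of SE
# Quality and the normal cost model used for equation (2).
from statistics import NormalDist

def technical_value(s: float, x: float, q: float) -> float:
    # Equation (1): value as the product of size, complexity, and quality.
    return s * x * q

def risk(expected_cost: float, cost_sigma: float) -> float:
    # Equation (2): the single-ended 90% confidence level of cost overrun,
    # r = c_r - E(c) with P(c < c_r) = 0.90, under a normal cost model.
    c_r = NormalDist(expected_cost, cost_sigma).inv_cdf(0.90)
    return c_r - expected_cost

def see(se_quality: float, se_cost: float, project_cost: float) -> float:
    # Equation (3): quality-weighted SE cost as a fraction of project cost.
    return se_quality * se_cost / project_cost

# Hypothetical project: $10M expected cost with $2M sigma, $800K of SE
# work performed at 75% of "best" quality.
print(technical_value(s=100, x=1.5, q=0.9))  # 135.0, in relative value units
print(risk(10_000_000, 2_000_000))           # ~$2.56M 90% overrun level
print(see(0.75, 800_000, 10_000_000))        # 0.06, i.e. an SEE of 6%
```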
Past Research

This section summarizes six prior works with conclusions that apply to the value of SE.

Project Definition – NASA

Werner Gruhl of the NASA Comptroller's office presented results [Gruhl 1992] that relate project quality metrics to a form of systems engineering effort (Figure 4). This data was developed within NASA in the late 1980s for 32 major projects of the 1970s and 1980s. The NASA data compares project cost overrun with the fraction of project funds spent during Phases A and B of the NASA five-phase process (called by Gruhl the "definition percent"), where

Definition Percent = Definition$ / (Target + Definition$)
Program Overrun = (Actual + Definition$) / (Target + Definition$)

The data shows that expending greater funds on project definition results in significantly less cost overrun during project development. Most projects used less than 10% of funds for project definition; most projects had cost overruns well in excess of 40%. The trend line on Gruhl's data seems to show an optimum project definition fraction of about 15%.

[Figure 4. NASA Data on Impact of Front End Project Definition Effort: program overrun versus definition percent for 32 NASA programs (including HST, Galileo, TDRSS, OMV, IRAS, Voyager, and others), with trend fit R² = 0.5206.]

The NASA data, however, does not directly apply to systems engineering. In Gruhl's research, the independent variable is the percent of funding spent during NASA Phases A and B, the project definition phases. Figure 5 shows the difference between this and true systems engineering effort. It is apparent from this difference that the relationship shown in the NASA data only loosely supports a hypothesis related to systems engineering.

[Figure 5. Definition Effort is not Equal to Systems Engineering Effort: within total project effort, NASA definition effort occupies the early phases while systems engineering effort extends across the development timeline.]
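For concreteness, the two ratios from Figure 4 can be sketched as follows, with invented cost numbers; none of the 32 NASA programs' data is reproduced here. Reading "overrun" as the percent excess of actual-plus-definition cost over the target-plus-definition baseline is an interpretation of the figure's stacked ratios, not something the figure states explicitly.

```python
# A sketch of the two Figure 4 ratios with invented numbers. Interpreting
# "program overrun" as the percent excess over the target baseline is an
# assumption; the figure itself only shows the two stacked ratios.
def definition_percent(definition: float, target: float) -> float:
    # Definition spending as a percent of target-plus-definition cost.
    return 100.0 * definition / (target + definition)

def program_overrun(actual: float, definition: float, target: float) -> float:
    # Percent by which actual-plus-definition cost exceeds the
    # target-plus-definition baseline.
    baseline = target + definition
    return 100.0 * ((actual + definition) - baseline) / baseline

# A hypothetical program: $200M target, $15M definition spending, $260M actual.
print(definition_percent(15, 200))    # ~7.0% definition percent
print(program_overrun(260, 15, 200))  # ~27.9% overrun
```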
Boundary Management Study

A statistical research project in the late 1980s [Ancona 1990] studied the use of time in engineering projects. The authors gathered data from 45 technology product development teams, including observation and tracking of the types of tasks performed by all project members throughout the projects. Secondary data included the degree of success in terms of product quality and marketability. Of the projects studied, 41 produced products that were later successfully marketed; the remaining four failed to produce a viable product.

One primary conclusion of the research was that a significant portion of project time was spent working at the team boundaries. Project time was divided as:
• Boundary management 14%
• Work within team 38%
• Individual work 48%

Boundary management included work that was typically done by a few individuals rather than by all members of the team. The research also studied how these classes changed over the life of the project from creation through development through diffusion. The discovered classes of boundary management were:
• Ambassador – buffering, building support, reporting, strategy
• Task Coordinator – lateral group coordination, information transfer, planning, negotiating
• Scout – obtaining possibilities from outside; interfacing with marketing
• Guard – withholding information, preventing disclosure

More important to the value of systems engineering, the research also concluded statistically that high-performing teams did more boundary management than low-performing teams. This relates to systems engineering because many of the boundary management tasks are those commonly performed as part of SE management. A secondary discovery of the project was that internal team dynamics (goals, processes, individual satisfaction) did not correlate with performance. This conclusion seems contrary to the widely held belief that defining good processes will create a good project.

Large Engineering Projects Study

A recent international research project led by the Massachusetts Institute of Technology (MIT) studied the strategic management of large engineering projects (LEPs) [Miller 2000]. The project reviewed the entire strategic history of 60 worldwide LEPs that included the development of infrastructure systems such as dams, power plants, road structures, and national information networks. The focus of the project was on strategic management rather than technical management. The project used both subjective and objective measures, including project goals, financial metrics, and interviews with participants.

The statistical results of the LEPs are shown in Figure 6. Cost and schedule targets were met more often (82% and 72%, respectively), but technical objective targets were met in only 45% of the 60 projects. Fully 37% of the projects completely failed to meet objectives, while another 18% met only some objectives.

[Figure 6. Many Large Engineering Projects Fail to Meet Objectives: percent of projects meeting cost targets (82%), schedule targets (72%), and objective targets (45%); 18% met only some objectives and 37% failed.]

Three of the many findings appear to have significance to the value of SE:
• The most important determinant of success was a coherent, well-developed organizational structure; in other words, a structure of leadership creates greater success.
• Technical difficulties, social disturbances, and size were not statistically linked to performance; all projects had turbulent events.
• Technical excellence could not save a socially unacceptable project; therefore, technical process definition is important but not sufficient.

As with the boundary management study, this last finding appears contrary to the widely held belief in the efficacy of process definitions. Both of these studies (Boundary Management, LEPs) seem to indicate that technical leadership is more important than the processes used.

Impact of Systems Engineering on Quality and Schedule

A unique opportunity occurred at Boeing, as reported by [Frantz 1995], in which three roughly similar systems were built at the same time using different levels of systems engineering. The three systems were Universal Holding Fixtures (UHF), used for manipulating large assemblies during the manufacture of airplanes. Each UHF was of a size on the order of 10' x 40', with accuracy on the order of thousandths of an inch. The three varied in their complexity, with differences in the numbers and types of sensors and interfaces. The three projects also varied in their use of explicit SE practices. In general, the more complex UHF also used more rigorous SE practices.
Some differences in process, for example, included the approach to stating and managing requirements, the approach to subcontract technical control, the types of design reviews, the integration methods, and the form of acceptance testing.

The primary differences noted in the results were in the subjective quality of work and the development time. Even in the face of greater complexity, the study showed that the use of more rigorous SE practices reduced the durations (a) from requirements to subcontract Request For Proposal (RFP), (b) from design to production, and (c) of overall development. Figure 7 shows the significant reduction in overall development time. It should be noted that UHF3 was the most complex system and UHF1 the least complex. Even though it was the most complex system, UHF3 (with better SE) completed in less than half the time of UHF1.

[Figure 7. Better SE Shortens Schedule: overall development time in weeks for UHF1, UHF2, and UHF3, on a 0-100 week scale.]

Systems Engineering Effectiveness

The IBM Commercial Products division recently implemented new SE processes in its development of commercial software. While performing this implementation, it tracked the effectiveness of the change through metrics of productivity. As reported by [Barker 2003], productivity metrics existed prior to the implementation and were used in cost estimation. These metrics were based on the cost per arbitrary "point" assigned as a part of system architecting. (The definition of "point" is deemed to be proprietary.) The number of "points," once assigned, became the basis for costing of project management, business management, systems engineering, system integration, and delivery into production.

During the SE implementation, the actual costs of eight projects were tracked against the original estimates of "points." Three projects used prior "non-SE" methods, while the remaining five used the new SE methods:

Year   Project     "Points"   Cost ($K)   SE Costs (%)   $/Point
2000   Project 1     12,934      18,191       0            1,406
2000   Project 2      1,223       2,400       0            1,962
2001   Project 3     10,209      11,596       9.2          1,136
2001   Project 4      8,707      10,266       0            1,179
2001   Project 5      4,678       5,099      10.7          1,090
2002   Project 6      5,743       5,626      14.4            980
2002   Project 7     14,417      10,026      10.2            695
2002   Project 8        929       1,600      16.0          1,739

Figure 8. Implementation of SE Processes Resulted in Statistically Significant Cost Decrease.

In the reported analysis, the preliminary data indicates that the use of SE processes improves project productivity when effectively combined with the project management and test processes. Cost per point for the prior projects averaged $1,350, while cost per point for the projects using SE processes averaged $944.
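The reported averages are consistent with aggregating each group's total cost over its total "points" (rather than taking a simple mean of the per-project ratios), and with grouping the projects by whether any SE cost was reported; both readings are inferences from the table. A short sketch reproducing the figures:

```python
# Reproducing the reported cost-per-point averages from the Figure 8 data.
# Rows: (project, year, points, cost in $K, SE cost percent).
projects = [
    ("Project 1", 2000, 12_934, 18_191, 0.0),
    ("Project 2", 2000,  1_223,  2_400, 0.0),
    ("Project 3", 2001, 10_209, 11_596, 9.2),
    ("Project 4", 2001,  8_707, 10_266, 0.0),
    ("Project 5", 2001,  4_678,  5_099, 10.7),
    ("Project 6", 2002,  5_743,  5_626, 14.4),
    ("Project 7", 2002, 14_417, 10_026, 10.2),
    ("Project 8", 2002,    929,  1_600, 16.0),
]

def cost_per_point(rows):
    # Aggregate $/point: total cost over total points, i.e. a points-weighted
    # average rather than a simple mean of the per-project ratios.
    total_cost_k = sum(cost for _, _, _, cost, _ in rows)
    total_points = sum(pts for _, _, pts, _, _ in rows)
    return 1000.0 * total_cost_k / total_points

non_se = [r for r in projects if r[4] == 0.0]
with_se = [r for r in projects if r[4] > 0.0]
print(round(cost_per_point(non_se)))   # ~1350 $/point, matching the reported $1,350
print(round(cost_per_point(with_se)))  # ~944 $/point, matching the reported $944
```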
Impact of Systems Engineering on Complex Systems

Another recent study was reported by [Kludze 2004], showing results of a survey on the impact of SE as perceived by NASA employees and by INCOSE members. The survey contained 40 questions related to demographics, cost, value, schedule, risk, and other general effects. Aggressive pursuit of responses generated 379 valid responses from a sample of 900 surveys sent out. Respondents were 36% from within NASA and 64% from INCOSE membership. NASA respondents were approximately equally distributed among systems engineers, program managers, and others, while INCOSE respondents were predominantly systems engineers.

While most of the survey relates in some way to the value of systems engineering, three primary results stand out. First, respondents were asked to evaluate the overall impact of systems engineering. The results, shown in Figure 9, indicate that the respondents believed systems engineering to have a moderate to significant impact on complex systems projects. It is noted that the response from the INCOSE group is considerably more positive than that from the NASA group.

[Figure 9. Overall Impact of SE: percent of respondents rating the impact as none, minimal, moderate, significant, or extreme, for the NASA and INCOSE groups.]

Second, respondents were specifically asked to evaluate the impact of SE on the cost of complex systems projects. The results are shown in Figure 10, in which it is clear that respondents believed SE to have a good to excellent impact on cost. Again, the INCOSE group is more positive than the NASA group.

[Figure 10. Cost Benefits of Systems Engineering: percent of respondents rating the impact of SE on cost as very poor, poor, fair, good, or excellent, for the NASA and INCOSE groups.]

Third, respondents were asked to indicate the percent of their most recent project cost that was expended on SE, using aggregated brackets of 0-5%, 6-10%, 11-15%, and 16% or more. Figure 11 shows the result. As expected, the respondents believed that their projects most often spent between 6-10% on SE, with few projects spending more than 10%. It appears that INCOSE respondents worked on projects that spent proportionately more on SE than in NASA. There is, however, an anomaly in this data, represented by the bimodal characteristic of the responses: many respondents indicated that their projects spent 16% or above. It is believed that this anomaly occurs because the respondents interpreted "project" to include such projects as a system design effort, in which most of the project is spent on SE.

[Figure 11. Percent of Total Project Cost Spent on Systems Engineering: distribution of responses across the brackets 0-5%, 6-10%, 11-15%, and 16% and above, for NASA, INCOSE, and combined.]

Value of Systems Engineering Study

In March 2001, the Systems Engineering Center of Excellence (SECOE), a subsidiary research arm of the International Council on Systems Engineering (INCOSE), initiated project 01-03 to collect and analyze data that would quantify the value of systems engineering. The original hypothesis for the project is similar to that presented above in Figure 3. The INCOSE Board of Directors supported the project with seed grant money to leverage other sources. Interim results of the continuing project have been reported in [Mar 2002]. This section provides final data on the survey phase of the project.

Data Submission

The original data submission form was created for total project data as well as phase-by-phase reporting. No submissions were received for phase-by-phase information. The form for total project data included:
• Planned & actual cost
• Planned & actual duration
• Systems engineering (SE) cost
• Systems engineering quality
• Objective success
• Comparative success

Each of the parameters was defined, and these definitions were on the submission form to guide respondents. Brief definitions of the terms are:

Costs (planned/actual) – project costs up to delivery of the first article, not including production costs.

Duration (planned/actual) – schedule up to delivery of the first article.

SE Costs – actual costs of performing traditional SE tasks, no matter who performed them.
For this project, "traditional SE tasks" are viewed with the broad definitions of [Frank 2000]. The form included a list of example SE tasks including "…technical management and coordination, mission and/or need analysis, system architecting, system-level technical analysis, requirements management, risk management, and other tasks associated with these."

SE Quality – subjective evaluation using a 0-10 scale, where 0 represents SE of no value, 5 indicates a normal SE effort, and 10 is unexcelled, world-class SE.

Objective success – subjective evaluation using a scale where 0 indicates no objectives met, 1.0 indicates all objectives met, and >1.0 indicates exceeding the objectives. This subjective measure is intended to approximate the "Objective Function"-based technical quality of [Honour 2001].

Comparative success – subjective evaluation using a 0-10 scale, where 0 indicates project failure, 5 indicates success equal to other projects, and 10 indicates unexcelled, world-class success. This subjective measure is intended as an alternate measure of project success.

Respondent Data. Data points submitted can be seen in Figures 12 and 13 for the 44 respondent projects. Figure 12 shows the percentage of SE cost as reported by the respondents, ranging from less than 1% to 26% with a mode at about 4%. Figure 13 shows the effective percentage of SE cost in terms of our defined SEE, ranging from less than 1% to 26% with one primary mode at 1% and a secondary, much smaller, mode at 8%. We note that the demographic in Figure 12 seems to corroborate the survey data obtained by Kludze. Most projects appear to spend on the order of 5% of the project cost on SE tasks, with considerably fewer projects spending over 10%.

[Figure 12. Histogram of Raw Submissions by SE Cost % of Total Project: number of projects versus SE cost as % of actual cost, 0-25%.]
[Figure 13. Histogram of Submissions by SE Effort (as % of Total Project): number of projects versus SE Effort % = SE Quality * SE Cost / Actual Cost, 0-25%.]

Analysis – Cost and Schedule

The results of the primary analysis concerning cost and schedule compliance are shown in Figures 14 and 15. Figure 14 shows the data for actual cost (AC) / planned cost (PC), while Figure 15 shows the data for actual schedule (AS) / planned schedule (PS). Both charts show (a) the best-fit statistical mean for the values using a least-sum-of-squares fit (solid line), and (b) 90% assurance values (1.6σ) assuming a Normal distribution at each vertical value of SEE (dashed lines). In both cases, the best-fit curve for the statistical mean appeared to be a second-order polynomial with a minimum between 15-20% SEE. The actual location of the minimum has little confidence because so few projects reported values of SEE above 10%. Covariance correlations for the curve fitting were considerably better when using SEE than when using the raw SE Cost %, indicating that the quality of the SE is an important factor in the mathematical quantification of SE value.

[Figure 14. Cost performance as a function of SE effort: actual/planned cost versus SE Effort = SE Quality * SE Cost / Actual Cost, with best-fit mean and 90% assurance bands.]
[Figure 15. Schedule performance as a function of SE effort: actual/planned schedule versus SE effort, with best-fit mean and 90% assurance bands.]
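The curve fit described above can be sketched briefly. The data points below are invented, since the raw SECOE submissions are not published with this paper; only the method follows the text: a least-sum-of-squares second-order polynomial with 1.6σ assurance bands.

```python
# A sketch of the analysis described above: a least-sum-of-squares
# second-order polynomial of actual/planned cost against SEE, with
# 1.6-sigma (90% single-ended) assurance bands. The data points are
# invented; the raw SECOE submissions are not published with the paper.
import numpy as np

see = np.array([0.01, 0.02, 0.03, 0.05, 0.08, 0.10, 0.14, 0.18, 0.22])
ac_pc = np.array([1.9, 1.6, 1.5, 1.3, 1.2, 1.1, 1.05, 1.0, 1.1])

# Best-fit statistical mean: quadratic least-squares fit.
coeffs = np.polyfit(see, ac_pc, deg=2)
fit = np.poly1d(coeffs)

# The minimum of a*x^2 + b*x + c lies at x = -b / (2a).
a, b, _ = coeffs
see_optimum = -b / (2 * a)
print(f"fitted optimum SEE: {see_optimum:.1%}")  # in the 15-20% band for this data

# 90% assurance band, assuming normally distributed residuals at each SEE.
sigma = np.std(ac_pc - fit(see), ddof=3)  # ddof=3 for the 3 fitted parameters
upper, lower = fit(see) + 1.6 * sigma, fit(see) - 1.6 * sigma
```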
These results correlate well with the past research reported above. The NASA research data shows an optimum of about 15% based on definition percent, corresponding to the 15% SEE shown in Figures 14 and 15. The Frantz data shows a significant reduction in schedule based on better application of SE, similar to Figure 15. The LEP data shows better cost control than schedule control, a trend also evident in comparing the forms of Figures 14 and 15. Finally, the Barker data shows a significant reduction in cost based on better application of SE, similar to Figure 14.

Analysis – Project Size

A secondary analysis correlated cost and schedule compliance with project size, where project size was approximated by the total actual cost. Figure 16 shows the overall trend in a logarithmic plot of project size from $1 million to $10 billion. It is an interesting phenomenon that projects at both ends of this range appear to be better cost-controlled than projects in the $10 million to $100 million range.

[Figure 16. Cost performance as a function of project size: actual cost / planned cost versus actual cost ($1M to $10B, logarithmic scale).]

Figures 17 and 18 show the slight trend in cost and schedule for projects under $100M. Figure 17 shows the relationship of actual cost (AC) / planned cost (PC) to project size, while Figure 18 shows the relationship of actual schedule (AS) / planned schedule (PS) to project size. In both cases, the smallest projects appear to have better cost/schedule control than do the mid-size (~$100M) projects.

[Figure 17. Cost performance as a function of project size (projects <$100M).]
[Figure 18. Schedule performance as a function of project size (projects <$100M).]

Test of Hypothesis

In the original hypothesis of Figure 3, the value of SE is expected to rise for low values of SEE, reach a maximum, and then fall away. Development Quality (DQ) can be defined as a function of technical product quality, project cost, project schedule, technical "size," technical complexity, and risk. The few data points gathered do not support exploration of all these factors, but a tentative approach to DQ can be calculated as the inverse average of the cost and schedule ratios:

DQ = 1 / ( ½ * (AC/PC + AS/PS) )    (4)

where AC is actual cost, PC is planned cost, AS is actual schedule, and PS is planned schedule. If a project completes on-cost and on-schedule, the value of DQ is 1. Projects that overrun cost and schedule have values of DQ < 1.

[Figure 19. Test of original hypothesis: development quality (cost/schedule based) versus SE effort.]
[Figure 20. Subjective quality as reported: comparative success (0-10 scale) versus SE effort.]

Figure 19 shows this rudimentary DQ plotted against SEE. There is a trend that appears to follow the pattern of the original hypothesis. However, because this approach does not yet include the factors of product quality, technical size, complexity, or risk, there is significant variability around the expected trend. Variability (scatter) also occurs due to other project factors beyond SE, such as political pressures and program management quality. As before, we note that most of the projects submitted appear to operate well below the apparent optimum.
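Equation (4) reduces to a one-line computation; a minimal sketch with a hypothetical project:

```python
# Equation (4) as a one-line computation; the project numbers below are
# hypothetical.
def development_quality(ac: float, pc: float, as_: float, ps: float) -> float:
    # Inverse average of the cost and schedule ratios: 1.0 means on-cost
    # and on-schedule; overruns push DQ below 1.0.
    return 1.0 / (0.5 * (ac / pc + as_ / ps))

# A project planned at $8M and 24 months that took $10M and 30 months:
print(development_quality(ac=10, pc=8, as_=30, ps=24))  # 0.8
```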
The data submitted for objective success provided no apparent correlation with SEE. As a second independent test of the original hypothesis, Figure 20 plots the comparative success values as reported subjectively by respondents. This shows that respondents perceived significantly lower success with projects that had low SEE than with projects that had high SEE. The shape of the comparative success curve approximates the original hypothesis, indicating that this subjective value might also be a rough measure of the hypothesized DQ.

Known Limitations

The data available for analysis in this project present several important limitations to the results. Any use of the values herein should be tempered by these limitations.

The data are self-reported and largely subjective, without independent checking. Those responding to the data requests may be assumed to be senior engineering personnel by nature of their association with INCOSE; such personnel can be expected to have the kind of data requested. Nonetheless, there have been no quality controls on the submission of data.

Perceptual influences likely color the data. The underlying hypotheses for this project are well known and widely accepted. Because of the wide acceptance, respondents can be expected to include a subconscious bias toward supporting the hypotheses. This single fact might have caused much of the correlation observed.

Systems engineering effort is also self-reported based on the respondents' individual perceptions of systems engineering. There is no certainty that different respondents had the same perceptions about the scope of work to be included within SEE.

Respondents come from the population of INCOSE members and others with whom the author had contact. This limits the scope of projects included within the data.

Conclusions

Under the limitations presented, however, some interim conclusions can be made from this data.

SE effort improves development quality. The data presented shows that increasing the level and quality of systems engineering has a positive effect on cost compliance, schedule compliance, and the subjective quality of the projects. In this, the original project hypothesis is supported by the data received.

Optimum SE effort is 15-20%. While there are few data points in the region of optimum SE effort, the trend lines appear to reach their maximum in the range of 15-20%. This same optimum value appears in the analyses of cost compliance, schedule compliance, and subjective quality. It is contrary to the usual project SE budgets of 3-8%. This optimum value is further supported by the prior works by NASA and by Kludze.

Quality of the SE effort matters. There is significant scattering of the data due to many factors, some of which are beyond the scope of SE. Nonetheless, correlation of the data is better when the subjective factor of SE Quality is included. This corroborates the widely held assumption that lower quality SE reduces its effectiveness.

Future Work

The data analysis of the SECOE project suggests that there is a strong case to be made for a quantitative relationship between systems engineering investment and the quality of project performance. Far more data is needed, however, to quantify and parameterize the relationships. It is hoped that this project report will stimulate organizations to share their data on systems engineering effectiveness to support work such as this research project. These conclusions are further supported by the correlation with the six other projects reported herein. A significant future benefit of this continuing work is in the estimation of systems engineering effort.
If the original hypothesis can be proven, quantified, and parameterized, then future system projects will be able to select a level of systems engineering investment that is appropriately optimum for the desired product quality and risk. SECOE is continuing work in collaboration with the University of Southern California's efforts to create the COSYSMO systems engineering costing model.

References

Ancona, D. and D. Caldwell, "Boundary Management," Research Technology Management, 1990.

Barker, B., "Determining Systems Engineering Effectiveness," Conference on Systems Integration, Stevens Institute of Technology, Hoboken, NJ, 2003.

Frank, M., "Cognitive and Personality Characteristics of Successful Systems Engineers," INCOSE International Symposium, Minneapolis, MN, 2000.

Frantz, W.F., "The Impact of Systems Engineering on Quality and Schedule – Empirical Evidence," NCOSE International Symposium, St. Louis, MO, 1995.

Goode, H.H. and R.E. Machol, System Engineering: An Introduction to the Design of Large-Scale Systems, McGraw-Hill, New York, 1957.

Gruhl, W., "Lessons Learned, Cost/Schedule Assessment Guide," internal presentation, NASA Comptroller's office, 1992.

Hall, M.N., "Reducing Long-term System Cost by Expanding the Role of the Systems Engineer," Proceedings of the 1993 International Symposium on Technology and Society, IEEE, Washington, DC, 1993, pp. 151-155.

Honour, E.C., "Characteristics of Engineering Disciplines," Proceedings of the 13th International Conference on Systems Engineering, University of Nevada, Las Vegas, 1999.

Honour, E.C., "Optimising the Value of Systems Engineering," INCOSE International Symposium, Melbourne, Australia, 2001.

Honour, E.C., "Quantitative Relationships in Effective Systems Engineering," INCOSE_IL Conference, ILTAM, Haifa, Israel, 2002.

Honour, E.C., "Toward Understanding the Value of Systems Engineering," Conference on Systems Integration, Stevens Institute of Technology, Hoboken, NJ, 2003.

Kludze, A.K., "The Impact of Systems Engineering on Complex Systems," Conference on Systems Engineering Research, University of Southern California, Los Angeles, CA, 2004.

Langenberg, I. and F. de Wit, "Managing the Right Thing: Risk Management," INCOSE International Symposium, Brighton, UK, 1999.

Mar, B.L. and E.C. Honour, "Value of Systems Engineering – SECOE Project Report," INCOSE International Symposium, Las Vegas, NV, 2002.

Miller, R., S. Floricel, and D.R. Lessard, The Strategic Management of Large Engineering Projects, MIT Press, 2000.

SE Applications TC, Systems Engineering Application Profiles, Version 3.0, INCOSE, 2000.

Sheard, S., "The Shangri-La of ROI," INCOSE International Symposium, Minneapolis, MN, 2000.

Thomas, L.D. and R.A. Mog, "A Quantitative Metric of System Development Complexity," INCOSE International Symposium, Los Angeles, CA, 1997.

Thomas, L.D. and R.A. Mog, "A Quantitative Metric of System Development Complexity: Validation Results," INCOSE International Symposium, Vancouver, BC, 1998.

Biography

Eric Honour was the 1997 INCOSE President. He has a BSSE from the US Naval Academy and an MSEE from the US Naval Postgraduate School, with 35 years of systems experience. He was a naval officer for nine years, using electronic systems in P-3 anti-submarine warfare aircraft. He has been a systems engineer, engineering manager, and program manager with Harris, E-Systems, and Link. He has taught engineering at USNA, at community colleges, and in continuing education courses. He was the founding President of the Space Coast Chapter of INCOSE.
He was the founding chair of the INCOSE Technical Board. He was selected in 2000 for Who's Who in Science and Technology. Mr. Honour provides technical management support and systems engineering training as President of Honourcode, Inc., and is the director of the INCOSE Systems Engineering Center of Excellence.