alternative_system_design

Professor John Lambadaris, SYSC4005/5001, Winter 2009

Comparison and Evaluation of Alternative System Designs

Slides are based on the texts:
- Discrete-Event System Simulation, by Banks et al.
- Discrete-Event Simulation: A First Course, by Leemis and Park
Purpose

- Purpose: comparison of alternative system designs.
- Approach: discuss a few of the many statistical methods that can be used to compare two or more system designs.
- Statistical analysis is needed to discover whether observed differences are due to:
  - differences in design, or
  - the random fluctuation inherent in the models.
Outline

- For two-system comparisons:
  - Independent sampling.
  - Correlated sampling (common random numbers).
- For multiple-system comparisons:
  - Bonferroni approach: confidence-interval estimation, screening, and selecting the best.
- Metamodels.
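The contrast between independent sampling and common random numbers (CRN) can be illustrated with a toy model. The response functions f1 and f2 below are hypothetical, not the course's example; the point is only that driving both systems with the same random stream makes Y1r and Y2r positively correlated, which shrinks the variance of their difference:

```python
import random
import statistics

def f1(u):
    # Hypothetical response of system 1 to random input u in [0, 1)
    return 10.0 + 4.0 * u

def f2(u):
    # Hypothetical response of system 2 (slightly faster, different sensitivity)
    return 9.0 + 3.5 * u

R = 10_000  # number of replications

# Independent sampling: each system driven by its own random stream.
rng_a, rng_b = random.Random(1), random.Random(2)
d_indep = [f1(rng_a.random()) - f2(rng_b.random()) for _ in range(R)]

# Common random numbers: both systems see the same stream.
rng = random.Random(3)
d_crn = []
for _ in range(R):
    u = rng.random()
    d_crn.append(f1(u) - f2(u))

v_indep = statistics.variance(d_indep)
v_crn = statistics.variance(d_crn)
print(v_indep, v_crn)  # the CRN variance is much smaller
```

Because both responses are monotone in the same input, the CRN differences vary only through the residual slope mismatch, so a confidence interval for the mean difference built from `d_crn` is far tighter at the same number of replications.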
Comparison of Two System Designs

- Goal: compare two possible configurations of a system
  - e.g., evaluation of two routing or scheduling schemes in a computer communication network.
- Approach: the method of replications is used to analyze the output data.
- The mean performance measure for system i is denoted by θi (i = 1, 2).
- Goal: obtain point and interval estimates for the difference in mean performance, namely θ1 − θ2.
Comparison of Two System Designs

- Vehicle-safety inspection example:
  - The station performs three jobs: (1) brake check, (2) headlight check, and (3) steering check.
  - Vehicle arrivals: Poisson with rate 9.5 per hour.
  - Present system:
    - Three stalls in parallel (one attendant makes all three inspections at each stall).
    - Service times for the three jobs: normally distributed with means 6.5, 6.0, and 5.5 minutes, respectively.
  - Alternative system:
    - Each attendant specializes in a single task; each vehicle passes through three work stations in series.
    - Mean service time for each job decreases by 10% (5.85, 5.4, and 4.95 minutes).
  - Performance measure: mean response time per vehicle (total time from vehicle arrival to its departure).
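A single replication of each design can be sketched with a simple discrete-event recursion: the parallel system assigns each vehicle to the earliest-available stall (exact for FCFS multi-server queues), and the series system uses the standard tandem-queue departure recursion. The standard deviation of the normal service times is not given on the slide, so SIGMA below is an assumed value, as are N and the seeds; this is a sketch, not the course's simulation:

```python
import random
import statistics

RATE = 9.5 / 60.0   # Poisson arrival rate: 9.5 vehicles/hour, in per-minute units
N = 2000            # vehicles per replication (assumed)
SIGMA = 0.5         # assumed std. dev. of each normal service time (not on the slide)

def service(rng, mean):
    """Normal service time, truncated at a small positive value."""
    return max(0.1, rng.normalvariate(mean, SIGMA))

def parallel_system(seed, means=(6.5, 6.0, 5.5)):
    """Present system: three stalls in parallel, one attendant per stall (FCFS)."""
    rng = random.Random(seed)
    free = [0.0, 0.0, 0.0]      # time at which each stall next becomes free
    t, responses = 0.0, []
    for _ in range(N):
        t += rng.expovariate(RATE)                  # next Poisson arrival
        s = sum(service(rng, m) for m in means)     # one attendant does all 3 jobs
        i = free.index(min(free))                   # earliest-available stall
        start = max(t, free[i])
        free[i] = start + s
        responses.append(free[i] - t)               # response = departure - arrival
    return statistics.mean(responses)

def series_system(seed, means=(5.85, 5.4, 4.95)):
    """Alternative system: three specialized stations in series, each single-server FIFO."""
    rng = random.Random(seed)
    done = [0.0, 0.0, 0.0]      # departure time of the previous vehicle from each station
    t, responses = 0.0, []
    for _ in range(N):
        t += rng.expovariate(RATE)
        clock = t
        for j, m in enumerate(means):
            clock = max(clock, done[j]) + service(rng, m)
            done[j] = clock
        responses.append(clock - t)
    return statistics.mean(responses)

# One replication Y_1r and Y_2r of each design (independent seeds here).
y1 = parallel_system(seed=42)
y2 = series_system(seed=7)
print(f"parallel: {y1:.1f} min, series: {y2:.1f} min")
```

Both configurations are heavily loaded (utilization above 0.9), so single-replication averages fluctuate a lot; this is exactly why the replication-based interval estimates discussed next are needed before declaring one design better.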
Comparison of Two System Designs

- From replication r of system i, the simulation analyst obtains an estimate Yir of the mean performance measure θi.
- Assuming that the estimators Yir are unbiased:
  θ1 = E(Y1r), r = 1, …, R1;   θ2 = E(Y2r), r = 1, …, R2
- Goal: compute a confidence interval for θ1 − θ2 to compare the two system designs.
- Confidence interval (c.i.) for θ1 − θ2:
  - If the c.i. lies entirely to the left of 0, there is strong evidence for the hypothesis that θ1 − θ2 < 0 (θ1 < θ2).
  - If the c.i. lies entirely to the right of 0, there is strong evidence for the hypothesis that θ1 − θ2 > 0 (θ1 > θ2).
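The interval itself is computed from the replication averages. A minimal sketch for independent sampling follows; it uses a normal quantile as a large-sample stand-in for the t quantile (for small R1, R2 the appropriate t value with Welch degrees of freedom should be used instead), and the replication data are made up for illustration:

```python
import statistics
from statistics import NormalDist

def ci_difference(y1, y2, alpha=0.05):
    """Approximate 100(1-alpha)% c.i. for theta_1 - theta_2 under independent sampling."""
    r1, r2 = len(y1), len(y2)
    point = statistics.mean(y1) - statistics.mean(y2)   # point estimate of theta_1 - theta_2
    se = (statistics.variance(y1) / r1 + statistics.variance(y2) / r2) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)             # large-R approximation of the t quantile
    return point - z * se, point + z * se

# Hypothetical replication averages Y_1r and Y_2r (minutes), for illustration only
y1 = [29.6, 32.1, 30.8, 33.0, 28.9, 31.4, 30.2, 32.5]
y2 = [25.1, 26.7, 24.8, 27.3, 25.9, 26.2, 24.5, 27.0]

lo, hi = ci_difference(y1, y2)
print(f"c.i. for theta_1 - theta_2: ({lo:.2f}, {hi:.2f})")
```

For these made-up data the interval lies entirely to the right of 0, so by the rule above one would conclude θ1 > θ2 (system 1 has the larger mean response time).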