# MIE360 Lecture 16 – Fitting Continuous Input Distributions


MIE360 Computer Modeling and Simulation Lecture Notes, Daniel Frances © 2010

## Lecture 16 – Fitting Continuous Input Distributions

This material largely duplicates the analysis for the discrete case; the exceptions (highlighted in gray in the original flowchart) are noted below.

**Discrete case:**

1. Select the next candidate distribution.
2. Use Maximum Likelihood Estimation to determine the distribution parameters that best fit the data.
3. Perform a Chi-square goodness-of-fit test to determine the confidence of the fit.
4. More distributions to try? If yes, return to step 1.
5. Is there an acceptable distribution? If yes, produce graphs and reports; if no, produce an empirical distribution.

**Continuous case** (the same loop, with two differences):

- Step 3 may use a Chi-square, Kolmogorov–Smirnov, or Anderson–Darling goodness-of-fit test.
- Step 5's graphs and reports also include Q–Q plots.
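The continuous-case loop can be sketched in code. Below is a minimal pure-Python illustration (no fitting library; the function names and sample data are hypothetical, not from the notes) using the exponential distribution's closed-form MLE, λ̂ = 1/x̄, and a hand-rolled Kolmogorov–Smirnov statistic:

```python
import math

def fit_exponential(sample):
    """MLE for the exponential rate: lambda-hat = 1 / sample mean."""
    return len(sample) / sum(sample)

def ks_statistic_exponential(sample, lam):
    """K-S distance between the empirical CDF and the Exp(lam) CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-lam * x)       # theoretical CDF at x
        d = max(d,
                abs((i + 1) / n - cdf),      # ECDF just after the jump at x
                abs(i / n - cdf))            # ECDF just before the jump at x
    return d

sample = [3.1, 0.6, 2.5, 0.3, 4.1, 1.2, 1.7, 5.1, 0.8, 1.2]  # hypothetical data
lam = fit_exponential(sample)
print(lam, ks_statistic_exponential(sample, lam))
```

In practice the K-S statistic would be compared against a critical value at the chosen confidence level to decide whether the fitted distribution is acceptable.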

## Maximum Likelihood Estimation

Assume for a moment that we are given a set of data X₁, X₂, …, Xₙ:

- from the data it appears that it can take any real value
- we need to fit a distribution
- as in the discrete case, we draw a histogram
- by comparing with the palette of candidate shapes, it looks like the exponential distribution, with pdf(x) = λe^(−λx)

What value of λ should we assign? Suppose we superimpose the histogram with "real" exponential distributions. As we change λ, some bars come closer, others move further apart. What criterion should we use?

- minimize the sum of squared differences between pdf(x) and (fₓ/n)?
- minimize the sum of absolute differences?
- maximize the likelihood of the sample?

The first two are easy to understand but hard to compute. The last one, maximum likelihood, is harder to grasp but relatively easy to compute, and we will not even try the first two for the continuous distribution. The mechanics are the same as in the discrete case.

**Maximum Likelihood Estimators**

Likelihood = L(λ) = f(X₁ | λ) · f(X₂ | λ) ··· f(Xₙ | λ) = λe^(−λX₁) · λe^(−λX₂) ··· λe^(−λXₙ) = λⁿ e^(−λ ΣᵢXᵢ)

(For a continuous distribution the factors are density values rather than probabilities.)
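The "maximize the likelihood" criterion can be checked numerically: a brute-force scan over candidate λ values should never beat the closed-form estimate 1/x̄ derived on the next page. A small sketch with hypothetical data:

```python
import math

def log_likelihood(lam, sample):
    """ln L(lam) = n*ln(lam) - lam*sum(X_i) for an exponential sample."""
    return len(sample) * math.log(lam) - lam * sum(sample)

sample = [3.1, 0.6, 2.5, 0.3, 4.1, 1.2]      # hypothetical data
mle = len(sample) / sum(sample)               # closed-form MLE: 1 / x-bar
# Scan a grid of candidate rates; the grid maximizer should sit
# next to the closed-form MLE and never exceed its likelihood.
grid = [k / 100 for k in range(1, 300)]
best = max(grid, key=lambda lam: log_likelihood(lam, sample))
print(mle, best)
```

The scan also illustrates why the log-likelihood is used: sums are numerically far better behaved than the product of n small density values.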
Technically it is easier to work with the logarithm of the likelihood function:

ln[L(λ)] = ln[λⁿ e^(−λ ΣᵢXᵢ)] = n ln(λ) − λ ΣᵢXᵢ

Setting the derivative with respect to λ to zero:

d/dλ ln[L(λ)] = n/λ − ΣᵢXᵢ = 0  ⟹  λ = n / ΣᵢXᵢ = 1/x̄

Therefore the exponential distribution's MLE estimate of λ is 1/x̄, the reciprocal of the sample mean. Thus, for example, if your sample was 3.1, 0.6, 2.5, 0.3, 4.1, 1.2, 1.7, 5.1, 0.8, 1.2, 2.5, 0.1, 10.3, 2.0, 3.7, …
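Using only the fifteen values visible above (the source truncates the sample mid-list, so this sketch is not the full example from the notes), the estimate works out as:

```python
# The 15 sample values visible in the notes (the source truncates the list).
sample = [3.1, 0.6, 2.5, 0.3, 4.1, 1.2, 1.7, 5.1, 0.8, 1.2,
          2.5, 0.1, 10.3, 2.0, 3.7]
x_bar = sum(sample) / len(sample)   # sample mean x-bar
lam_hat = 1.0 / x_bar               # MLE of lambda = 1 / x-bar
print(x_bar, lam_hat)               # roughly 2.613 and 0.383
```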
