MSE(θ̂) = E(θ̂ − θ)² = Var(θ̂) + bias(θ̂)²

Using this to compare the estimators:

MSE(b̂_MOM) = 1/n

MSE(b̂_MLE) = 1/n² + (b − 1/n − b)² = 2/n²

MSE(b̂_MLE + 1/n) = 1/n²

Since MSE(b̂_MLE + 1/n) ≤ MSE(b̂_MLE) ≤ MSE(b̂_MOM), b̂_MLE + 1/n is the best of the three by this criterion.

Note, however, that b̂_MLE + 1/n has a fundamental flaw: depending on the sample points, the estimate may turn out to be larger than some of the observed values, which is impossible under the uniform distribution.
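The comparison above can be checked by simulation. A minimal sketch follows; since the model itself is not shown in this excerpt, it ASSUMES X ~ Uniform(b, b+1), with b̂_MOM = X̄ − 1/2 and b̂_MLE = max(Xᵢ) − 1. Under that assumption the exact MSE constants differ slightly from the 1/n, 2/n², 1/n² values derived above, but for moderate n the ordering of the three estimators, the decomposition MSE = Var + bias², and the support-violation flaw all appear:

```python
# Monte Carlo sketch: compare three estimators of b.
# ASSUMPTION (not stated in the excerpt): X ~ Uniform(b, b+1),
# b_mom = sample mean - 1/2, b_mle = sample max - 1.
import numpy as np

rng = np.random.default_rng(0)
b, n, trials = 3.0, 100, 100_000
x = rng.uniform(b, b + 1, size=(trials, n))

estimators = {
    "b_mom": x.mean(axis=1) - 0.5,
    "b_mle": x.max(axis=1) - 1.0,
    "b_mle + 1/n": x.max(axis=1) - 1.0 + 1.0 / n,
}

mse = {}
for name, est in estimators.items():
    bias = est.mean() - b
    # With ddof=0 (NumPy's default), MSE = Var + bias^2 is an
    # exact identity for the sample, up to float rounding.
    assert abs(((est - b) ** 2).mean() - (est.var() + bias**2)) < 1e-12
    mse[name] = ((est - b) ** 2).mean()
    print(f"{name:12s} MSE = {mse[name]:.2e}")

# The ranking derived in the text.
assert mse["b_mle + 1/n"] <= mse["b_mle"] <= mse["b_mom"]

# The flaw: in some samples the adjusted estimate exceeds an
# observation, which is impossible for a lower endpoint.
print("violations:", (estimators["b_mle + 1/n"] > x.min(axis=1)).sum())
```

The final count shows how often b̂_MLE + 1/n lands above the smallest observation, illustrating why an estimator can win on MSE while still violating the support constraint.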