We use

MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2] = Var(\hat{\theta}) + bias(\hat{\theta})^2

to compare the estimators:

MSE(\hat{\theta}_{MOM}) = \frac{1}{12n}

MSE(\hat{\theta}_{MLE}) = \frac{1}{n^2} + \left(\frac{1}{n}\right)^2 = \frac{2}{n^2}

MSE\left(\hat{\theta}_{MLE} + \tfrac{1}{n}\right) = \frac{1}{n^2}

Since MSE(\hat{\theta}_{MLE} + \tfrac{1}{n}) \le MSE(\hat{\theta}_{MLE}) \le MSE(\hat{\theta}_{MOM}), the estimator \hat{\theta}_{MLE} + \tfrac{1}{n} is the best of the three by this criterion.

Also note that the MLE has a fundamental flaw: depending on your sample points, the estimator may end up being larger than some of the observed values, which is impossible under the uniform distribution.
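The comparison above can be checked by Monte Carlo simulation. The excerpt does not restate the model, so the sketch below assumes the setup these formulas typically come from: X_1, ..., X_n i.i.d. Uniform(θ, θ+1), with the method-of-moments estimator \hat{\theta}_{MOM} = \bar{X} - 1/2 and the MLE \hat{\theta}_{MLE} = \max_i X_i - 1. These concrete choices are assumptions for illustration, not taken from the text.

```python
import random

def simulate_mse(theta=2.0, n=50, trials=20000, seed=0):
    """Monte Carlo estimate of the MSE of three estimators of theta,
    assuming (hypothetically) X_i ~ Uniform(theta, theta + 1)."""
    rng = random.Random(seed)
    sq_err = {"MOM": 0.0, "MLE": 0.0, "MLE+1/n": 0.0}
    for _ in range(trials):
        xs = [theta + rng.random() for _ in range(n)]
        est = {
            "MOM": sum(xs) / n - 0.5,      # assumed: mean(X) - 1/2
            "MLE": max(xs) - 1.0,          # assumed: max(X) - 1
            "MLE+1/n": max(xs) - 1.0 + 1.0 / n,  # bias-corrected MLE
        }
        for name, value in est.items():
            sq_err[name] += (value - theta) ** 2
    # Average squared error approximates MSE = Var + bias^2.
    return {name: total / trials for name, total in sq_err.items()}

mse = simulate_mse()
```

Under these assumptions the simulated MSEs reproduce the ordering in the derivation: MSE(MLE+1/n) < MSE(MLE) < MSE(MOM), with MSE(MOM) close to 1/(12n).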
This note was uploaded on 07/22/2009 for the course IEOR 165 taught by Professor Shanthikumar during the Summer '08 term at University of California, Berkeley.