Mean Squared Error and Maximum Likelihood: Lecture XVIII
Charles B. Moss
October 12, 2010

I. Mean Squared Error

A. As stated in our discussion of closeness, one potential measure of the goodness of an estimator is

E[(\hat{\theta} - \theta)^2]    (1)

B. In the preceding example, the mean squared error of the estimator T can be written as

E[(T - \theta)^2]    (2)

where \theta is the true parameter value, lying between zero and one.

C. This expected value is conditioned on the probability of T at each value of \theta. Given \theta, the probability of each Bernoulli outcome X \in \{0, 1\} is

P[X, \theta] = \theta^X (1 - \theta)^{1 - X}    (3)

If the two draws are independent,

P[X_1, X_2, \theta] = \theta^{X_1 + X_2} (1 - \theta)^{2 - X_1 - X_2}    (4)

The mean squared error at any \theta can then be derived as

MSE(\theta) = P[0, 0, \theta](0 - \theta)^2 + 2 P[0, 1, \theta](0.5 - \theta)^2 + P[1, 1, \theta](1 - \theta)^2    (5)

D. The mean squared error for S can similarly be computed as

MSE(\theta) = P[0, \theta](0 - \theta)^2 + P[1, \theta](1 - \theta)^2    (6)

E. Finally, the mean squared error of W can be written as

MSE(\theta) = (0.5 - \theta)^2    (7)

F. The mean squared error for each estimator is presented in Figure 1.

[Figure 1: Comparison of MSE for Various Estimators]

AEB 6571 Econometric Methods I, Professor Charles B. Moss, Lecture XVIII, Fall 2010
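The three MSE curves above can be evaluated numerically. The sketch below is not from the original notes; it assumes (consistent with equations (5)-(7)) that T is the mean of two Bernoulli draws, S is a single draw, and W is the constant estimator 0.5.

```python
# Hypothetical illustration: MSE curves for estimators of a Bernoulli
# parameter theta, assuming T = (X1 + X2)/2, S = X1, and W = 0.5.

def mse_T(theta):
    """Equation (5): enumerate the outcomes of two independent draws."""
    p00 = (1 - theta) ** 2      # P[0, 0, theta]
    p01 = theta * (1 - theta)   # P[0, 1, theta]
    p11 = theta ** 2            # P[1, 1, theta]
    return (p00 * (0 - theta) ** 2
            + 2 * p01 * (0.5 - theta) ** 2
            + p11 * (1 - theta) ** 2)

def mse_S(theta):
    """Equation (6): a single draw, so only outcomes 0 and 1."""
    return (1 - theta) * (0 - theta) ** 2 + theta * (1 - theta) ** 2

def mse_W(theta):
    """Equation (7): the constant estimator has no sampling variance."""
    return (0.5 - theta) ** 2

# The enumerations collapse to familiar closed forms:
#   mse_T(theta) = theta*(1 - theta)/2   (variance of a two-draw mean)
#   mse_S(theta) = theta*(1 - theta)     (variance of one draw)
for theta in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(mse_T(theta) - theta * (1 - theta) / 2) < 1e-12
    assert abs(mse_S(theta) - theta * (1 - theta)) < 1e-12
```

Evaluating the three functions over a grid of theta values reproduces the qualitative pattern in Figure 1: W dominates near theta = 0.5 but is badly biased elsewhere, while T improves on S everywhere except at the endpoints, where both are exact.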
This note was uploaded on 07/15/2011 for the course AEB 6180 taught by Professor Staff during the Spring '10 term at University of Florida.