# Lecture 18-2007 - Mean Squared Error and Maximum Likelihood

Mean Squared Error and Maximum Likelihood, Lecture XVIII

I. Mean Squared Error

A. As stated in our discussion on closeness, one potential measure for the goodness of an estimator is $E[(\hat{\theta} - \theta)^2]$, where $\hat{\theta}$ is the estimator and $\theta$ is the true value.

B. In the preceding example, the mean squared error of the estimator $T$ can be written as $E[(T - \theta)^2]$, where $\theta$ is the true parameter value between zero and one.

C. This expected value is conditioned on the probability of $T$ at each value of $\theta$. For a given $\theta$, the probability of each observation $X$ becomes
$$P(X) = \theta^{X}(1-\theta)^{1-X}.$$
If the two events are independent:
$$P(X_1, X_2) = \theta^{X_1 + X_2}(1-\theta)^{2 - X_1 - X_2}.$$
The mean squared error at any $\theta$ can then be derived as
$$\mathrm{MSE}_T(\theta) = P(0,0;\theta)\,(0-\theta)^2 + 2\,P(0,1;\theta)\,(0.5-\theta)^2 + P(1,1;\theta)\,(1-\theta)^2.$$

D. The mean squared error for $S$ can similarly be computed as
$$\mathrm{MSE}_S(\theta) = P(0;\theta)\,(0-\theta)^2 + P(1;\theta)\,(1-\theta)^2.$$

E. Finally, the mean squared error of $W$ can be written as
$$\mathrm{MSE}_W(\theta) = (0.5-\theta)^2.$$

F. The mean squared errors for each estimator can then be depicted as:
[Figure: MSE of $T$, $S$, and $W$ plotted against $\theta$ — image not recovered from the extraction.]
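The formulas in parts C–E can be evaluated directly. The sketch below assumes what the formulas imply but the extraction does not state explicitly: $T = (X_1 + X_2)/2$ for two independent Bernoulli($\theta$) draws, $S$ a single draw $X_1$, and $W$ the constant estimator $0.5$.

```python
import numpy as np

# MSE formulas from parts C-E, assuming (as the formulas imply)
# T = (X1 + X2)/2 for two independent Bernoulli(theta) draws,
# S = a single draw X1, and W = the constant estimator 0.5.

def mse_T(theta):
    # P(0,0) = (1-theta)^2, 2*P(0,1) = 2*theta*(1-theta), P(1,1) = theta^2
    return ((1 - theta) ** 2 * (0 - theta) ** 2
            + 2 * theta * (1 - theta) * (0.5 - theta) ** 2
            + theta ** 2 * (1 - theta) ** 2)

def mse_S(theta):
    # P(0) = 1 - theta, P(1) = theta; simplifies to theta * (1 - theta)
    return (1 - theta) * theta ** 2 + theta * (1 - theta) ** 2

def mse_W(theta):
    return (0.5 - theta) ** 2

theta = np.linspace(0.0, 1.0, 5)
print(np.round(mse_T(theta), 4))  # algebraically equal to theta*(1-theta)/2
print(np.round(mse_S(theta), 4))
print(np.round(mse_W(theta), 4))
```

Expanding $\mathrm{MSE}_T$ confirms it collapses to $\theta(1-\theta)/2$, the variance of the mean of two Bernoulli draws, which is what sampling theory predicts for an unbiased sample mean.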

AEB 6933 - Mathematical Statistics for Food and Resource Economics, Lecture XVIII, Professor Charles Moss, Fall 2007

G. Definition 7.2.1. Let $X$ and $Y$ be two estimators of $\theta$. We say that $X$ is better (or more efficient) than $Y$ if $E[(X - \theta)^2] \le E[(Y - \theta)^2]$ for all $\theta$, with strict inequality for at least one $\theta$.
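Definition 7.2.1 can be checked numerically on a grid of $\theta$ values. This is a sketch assuming the estimators of the earlier example: $T$ the mean of two independent Bernoulli($\theta$) draws, with MSE $\theta(1-\theta)/2$, and $S$ a single draw, with MSE $\theta(1-\theta)$.

```python
import numpy as np

# Numeric check of Definition 7.2.1: T is better than S if its MSE is
# weakly smaller for every theta and strictly smaller for at least one.
theta = np.linspace(0.0, 1.0, 101)
mse_T = theta * (1 - theta) / 2   # mean of two Bernoulli(theta) draws
mse_S = theta * (1 - theta)       # a single Bernoulli(theta) draw

better = np.all(mse_T <= mse_S) and np.any(mse_T < mse_S)
print(better)  # True: T is more efficient than S
```

Note that $T$ and $W$ are not comparable under this definition: $W$ has smaller MSE near $\theta = 0.5$ (where $\mathrm{MSE}_W = 0$) but larger MSE near the endpoints, so neither dominates the other. That is the point of the figure in part F.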

## This note was uploaded on 07/18/2011 for the course AEB 6933 taught by Professor Carriker during the Fall '09 term at University of Florida.
