Maximum likelihood estimators and least squares

November 11, 2010

1  Maximum likelihood estimators

A maximum likelihood estimate for some hidden parameter (or parameters, plural) of a probability distribution is a number computed from an i.i.d. sample $X_1, \ldots, X_n$ from the given distribution that maximizes something called the likelihood function. Suppose that the distribution in question is governed by a pdf $f(x; \theta_1, \ldots, \theta_k)$, where the $\theta_i$'s are all hidden parameters. The likelihood function associated to the sample is just
$$L(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} f(X_i; \theta_1, \ldots, \theta_k).$$
For example, if the distribution is $N(\mu, \sigma^2)$, then
$$L(X_1, \ldots, X_n; \mu, \sigma^2) \;=\; \frac{1}{(2\pi)^{n/2}\sigma^n} \exp\left(-\frac{1}{2\sigma^2}\left((X_1-\mu)^2 + \cdots + (X_n-\mu)^2\right)\right). \tag{1}$$
Note that I am using $\mu$ and $\sigma^2$ to indicate that these are variable (and also to set up the language of estimators).

Why should one expect a maximum likelihood estimate for some parameter to be a good estimate? Well, what the likelihood function measures is how likely $(X_1, \ldots, X_n)$ is to have come from the distribution assuming particular values for the hidden parameters; the more likely this is, the closer one would think those particular choices of hidden parameters are to the true values. Let's see two examples:

Example 1. Suppose that $X_1, \ldots, X_n$ are generated from a normal distribution having hidden mean $\mu$ and variance $\sigma^2$. Compute an MLE for $\mu$ from the sample.

Solution. As we said above, the likelihood function in this case is given by (1). It is obvious that to maximize $L$ as a function of $\mu$ and $\sigma^2$ we must minimize
$$\sum_{i=1}^{n} (X_i - \mu)^2$$
as a function of $\mu$. Upon taking a derivative with respect to $\mu$ and setting it to 0, we find that
$$-2\sum_{i=1}^{n}(X_i - \mu) = 0, \qquad \text{so} \qquad \hat{\mu} \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i \;=\; \bar{X},$$
the sample mean.
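The conclusion of Example 1 can be checked numerically: with $\sigma^2$ held fixed, the log of the likelihood (1) should be largest at the sample mean. A minimal sketch in Python (the function name and the sample parameters below are my own choices for illustration, not from the notes):

```python
import math
import random

def normal_log_likelihood(sample, mu, sigma2):
    """Log of the likelihood (1): the sum of log N(mu, sigma2) densities."""
    n = len(sample)
    const = -0.5 * n * math.log(2 * math.pi * sigma2)
    return const - sum((x - mu) ** 2 for x in sample) / (2 * sigma2)

random.seed(0)
# Draw an i.i.d. sample from N(mu = 3, sigma^2 = 4); gauss takes the
# standard deviation, so sigma = 2.
sample = [random.gauss(3.0, 2.0) for _ in range(1000)]

x_bar = sum(sample) / len(sample)  # the claimed MLE: the sample mean

# The log-likelihood at x_bar should beat any other candidate for mu.
for mu in (x_bar - 1.0, x_bar - 0.1, x_bar + 0.1, x_bar + 1.0):
    assert normal_log_likelihood(sample, x_bar, 4.0) > \
           normal_log_likelihood(sample, mu, 4.0)
print(x_bar)  # close to the true mean 3
```

Note that maximizing the log-likelihood is equivalent to maximizing the likelihood itself, since $\log$ is increasing; working with sums rather than products also avoids numerical underflow for large $n$.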