Maximum likelihood estimators and least squares

November 11, 2010

1  Maximum likelihood estimators

A maximum likelihood estimate for some hidden parameter $\lambda$ (or parameters, plural) of some probability distribution is a number $\hat\lambda$, computed from an i.i.d. sample $X_1, \ldots, X_n$ from the given distribution, that maximizes something called the "likelihood function". Suppose that the distribution in question is governed by a pdf $f(x; \lambda_1, \ldots, \lambda_k)$, where the $\lambda_i$'s are all hidden parameters. The likelihood function associated to the sample is just

$$L(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} f(X_i; \lambda_1, \ldots, \lambda_k).$$

For example, if the distribution is $N(\mu, \sigma^2)$, then

$$L(X_1, \ldots, X_n; \hat\mu, \hat\sigma^2) \;=\; \frac{1}{(2\pi)^{n/2}\,\hat\sigma^{\,n}} \exp\!\left( -\frac{1}{2\hat\sigma^2} \left( (X_1 - \hat\mu)^2 + \cdots + (X_n - \hat\mu)^2 \right) \right). \tag{1}$$

Note that I am using $\hat\mu$ and $\hat\sigma^2$ to indicate that these are variables (and also to set up the language of estimators).

Why should one expect a maximum likelihood estimate for some parameter to be a "good estimate"? What the likelihood function measures is how likely $(X_1, \ldots, X_n)$ is to have come from the distribution assuming particular values for the hidden parameters; the more likely this is, the closer one would think those particular choices for the hidden parameters are to the true values. Let's see two examples.

Example 1. Suppose that $X_1, \ldots, X_n$ are generated from a normal distribution having hidden mean $\mu$ and variance $\sigma^2$. Compute an MLE for $\mu$ from the sample.

Solution. As we said above, the likelihood function in this case is given by (1). It is clear that to maximize $L$ as a function of $\hat\mu$ and $\hat\sigma^2$ we must minimize

$$\sum_{i=1}^{n} (X_i - \hat\mu)^2$$

as a function of $\hat\mu$. Upon taking a derivative with respect to $\hat\mu$ and setting it to $0$, we find that

$$\hat\mu \;=\; \frac{1}{n} \sum_{i=1}^{n} X_i \;=\; \bar{X},$$

the sample mean.
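The closed-form answer above can be checked numerically: maximizing the likelihood (1) is the same as minimizing its negative logarithm, and the optimizer should land on the sample mean. The sketch below is one way to do this, assuming NumPy and SciPy are available; the sample parameters (true mean 3, true standard deviation 2) and the helper name `neg_log_likelihood` are illustrative choices, not part of the notes.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sample: n = 1000 draws from N(mu = 3, sigma^2 = 4).
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=1000)

def neg_log_likelihood(params, data):
    """Negative log of the likelihood in equation (1)."""
    mu, log_sigma = params          # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    n = data.size
    # log L = -n log(sigma) - (n/2) log(2 pi) - sum((x - mu)^2) / (2 sigma^2)
    ll = (-n * np.log(sigma)
          - 0.5 * n * np.log(2 * np.pi)
          - np.sum((data - mu) ** 2) / (2 * sigma ** 2))
    return -ll

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(x,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# The numerical optimum should agree with the closed forms:
# mu_hat = sample mean, sigma_hat = sqrt((1/n) sum (x_i - mean)^2).
print(mu_hat, x.mean())    # should be nearly identical
print(sigma_hat, x.std())  # np.std divides by n, matching the MLE of sigma
```

Note that the maximizing $\hat\sigma$ divides by $n$, not $n-1$: the MLE of the variance is the biased sample variance, which is why `np.std` with its default `ddof=0` matches it.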

This note was uploaded on 10/23/2011 for the course MATH 3225 taught by Professor Staff during the Spring '08 term at Georgia Tech.
