Lecture 5

Let us give one more example of the MLE.

Example 3. The uniform distribution $U[0, \theta]$ on the interval $[0, \theta]$ has p.d.f.
$$
f(x \mid \theta) =
\begin{cases}
\dfrac{1}{\theta}, & 0 \le x \le \theta, \\[4pt]
0, & \text{otherwise.}
\end{cases}
$$
The likelihood function is
$$
\varphi(\theta) = \prod_{i=1}^{n} f(X_i \mid \theta)
= \frac{1}{\theta^n}\, I\big(X_1, \ldots, X_n \in [0, \theta]\big)
= \frac{1}{\theta^n}\, I\big(\max(X_1, \ldots, X_n) \le \theta\big).
$$
Here the indicator function $I(A)$ equals $1$ if $A$ happens and $0$ otherwise. What we wrote is that the product of the p.d.f.s $f(X_i \mid \theta)$ equals $0$ if at least one of the factors is $0$, and this happens if at least one of the $X_i$ falls outside the interval $[0, \theta]$, which is the same as saying that the maximum among them exceeds $\theta$. In other words,
$$
\varphi(\theta) = 0 \ \text{ if } \ \theta < \max(X_1, \ldots, X_n),
\qquad
\varphi(\theta) = \frac{1}{\theta^n} \ \text{ if } \ \theta \ge \max(X_1, \ldots, X_n).
$$
Since $1/\theta^n$ is decreasing in $\theta$, looking at Figure 5.1 we see that $\hat{\theta} = \max(X_1, \ldots, X_n)$ is the MLE.

[Figure 5.1: Maximize $\varphi(\theta)$ over $\theta \ge \max(X_1, \ldots, X_n)$.]

5.1 Consistency of MLE.

Why does the MLE $\hat{\theta}$ converge to the unknown parameter $\theta_0$? This is not immediately obvious, and in this section we will give a sketch of why this happens.

First of all, the MLE is a maximizer of
$$
L_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \log f(X_i \mid \theta),
$$
which is just the log-likelihood function normalized by $\frac{1}{n}$ (of course, this does not affect the maximization). $L_n(\theta)$ depends on the data. Let us consider the function $l(X \mid \theta) = \log f(X \mid \theta)$ and define
$$
L(\theta) = \mathbb{E}_{\theta_0}\, l(X \mid \theta),
$$
where we recall that $\theta_0$ is the true unknown parameter of the sample $X_1, \ldots, X_n$. By the law of large numbers, for any ...
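As a quick numerical check of Example 3, here is a minimal Python sketch (NumPy assumed; the true parameter $\theta = 2$, the seed, and the sample sizes are arbitrary illustrative choices, not from the notes). It simulates $U[0, \theta]$ samples and computes the MLE $\hat{\theta} = \max(X_1, \ldots, X_n)$ derived above:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible

theta_true = 2.0  # hypothetical true parameter of U[0, theta]

for n in (10, 100, 1_000, 10_000):
    sample = rng.uniform(0.0, theta_true, size=n)
    theta_hat = sample.max()  # MLE derived above: max(X_1, ..., X_n)
    print(f"n = {n:6d}   MLE = {theta_hat:.4f}")
```

The printed estimates approach $\theta = 2$ from below as $n$ grows, since $\max(X_1, \ldots, X_n) \le \theta$ always.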
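To make the definitions of $L_n(\theta)$ and $L(\theta)$ concrete, here is a small sketch under assumptions of my own choosing (a $N(\theta_0, 1)$ model, not specified in the notes). For this model $l(x \mid \theta) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}(x - \theta)^2$, and the expectation has the closed form $L(\theta) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}\big(1 + (\theta - \theta_0)^2\big)$, which is maximized exactly at $\theta = \theta_0$. The code compares the data-dependent $L_n(\theta)$ with $L(\theta)$ at a few points:

```python
import numpy as np

rng = np.random.default_rng(1)  # seed chosen for reproducibility

theta0 = 1.5   # hypothetical true parameter
n = 100_000    # large sample, so the law of large numbers is visible
X = rng.normal(theta0, 1.0, size=n)

def log_f(x, theta):
    # log-density of N(theta, 1) evaluated at x
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - theta) ** 2

for theta in (0.5, 1.0, 1.5, 2.0):
    L_n = np.mean(log_f(X, theta))  # L_n(theta): average log-likelihood of the data
    L = -0.5 * np.log(2 * np.pi) - 0.5 * (1 + (theta - theta0) ** 2)  # L(theta): exact expectation
    print(f"theta = {theta:3.1f}   L_n = {L_n:8.4f}   L = {L:8.4f}")
```

At each $\theta$ the empirical average $L_n(\theta)$ lands close to $L(\theta)$, and both are largest near $\theta_0 = 1.5$, which is the behavior the consistency argument in this section builds on.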