03_Estimation_part3

NTHU MATH 2820, 2008, Lecture Notes (by Shao-Wei Cheng, NTHU, Taiwan)

Ch8, p.33

Theorem 6.4 (TBp. 276). Under appropriate smoothness conditions on $f$,
$$ I(\theta) \equiv E\!\left[\left(\frac{\partial}{\partial\theta}\log f(X_1\mid\theta)\right)^{2}\right] = -\,E\!\left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(X_1\mid\theta)\right]. $$

Notes
1. Let $X_1,\ldots,X_n$ be an i.i.d. sample of size $n$ from a pdf/pmf $f(x\mid\theta)$. Then
$$ I_{X_1,\ldots,X_n}(\theta) = E\!\left[\left(\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n} f(X_i\mid\theta)\right)^{2}\right] = E\!\left[\left(\sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f(X_i\mid\theta)\right)^{2}\right] $$
$$ = \sum_{i=1}^{n} E\!\left[\left(\frac{\partial}{\partial\theta}\log f(X_i\mid\theta)\right)^{2}\right] + 2\sum_{i<j} E\!\left[\frac{\partial}{\partial\theta}\log f(X_i\mid\theta)\right] E\!\left[\frac{\partial}{\partial\theta}\log f(X_j\mid\theta)\right] $$
$$ = n\,E\!\left[\left(\frac{\partial}{\partial\theta}\log f(X_1\mid\theta)\right)^{2}\right] \equiv n\,I(\theta), $$
where the cross terms vanish because the score has mean zero, $E[\frac{\partial}{\partial\theta}\log f(X_i\mid\theta)] = 0$ (the first identity in the proof below).
2. $I(\theta)$ is the Fisher information contained in a sample of size one.
3. The Fisher informations of independent samples are additive.
4. For an i.i.d. sample, $I_{X_1,\ldots,X_n}(\theta) = n\,I(\theta) = E\{[l'(\theta)]^{2}\} = -E[l''(\theta)]$, where $l(\theta)$ is the log likelihood of the whole sample.

Ch8, p.34

Proof. Since $\int f(x\mid\theta)\,dx = 1$ for all $\theta$,
$$ 0 = \frac{\partial}{\partial\theta}\int f(x\mid\theta)\,dx = \int \frac{\partial}{\partial\theta} f(x\mid\theta)\,dx = \int \left[\frac{\partial}{\partial\theta}\log f(x\mid\theta)\right] f(x\mid\theta)\,dx, $$
$$ 0 = \frac{\partial^{2}}{\partial\theta^{2}}\int f(x\mid\theta)\,dx = \frac{\partial}{\partial\theta}\int \left[\frac{\partial}{\partial\theta}\log f(x\mid\theta)\right] f(x\mid\theta)\,dx = \int \left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(x\mid\theta)\right] f(x\mid\theta)\,dx + \int \left[\frac{\partial}{\partial\theta}\log f(x\mid\theta)\right]^{2} f(x\mid\theta)\,dx. $$
(The smoothness of $f$ is needed to interchange integration and differentiation.)

Example 6.18 (Fisher information of i.i.d. Bernoulli $B(\theta)$). Let $X_1,\ldots,X_n$ be i.i.d. from the Bernoulli distribution $B(\theta)$, i.e., the pmf of $X_i$ is $\theta^{x}(1-\theta)^{1-x}$ for $x\in\{0,1\}$; then $E(X_i)=\theta$ and $\mathrm{Var}(X_i)=\theta(1-\theta)$. For a single observation $X_i$, the log likelihood and its first and second derivatives are:
$$ \log f(x\mid\theta) = x\log\theta + (1-x)\log(1-\theta), $$
$$ \frac{\partial}{\partial\theta}\log f(x\mid\theta) = \frac{x}{\theta} - \frac{1-x}{1-\theta} = \frac{x-\theta}{\theta(1-\theta)}, $$
$$ \frac{\partial^{2}}{\partial\theta^{2}}\log f(x\mid\theta) = -\frac{x}{\theta^{2}} - \frac{1-x}{(1-\theta)^{2}}. $$
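As a quick sanity check of Theorem 6.4, the two forms of $I(\theta)$ can be compared exactly for the Bernoulli pmf by summing over $x\in\{0,1\}$ with the derivatives just derived. Below is a minimal Python sketch (the function names and the value $\theta = 0.3$ are illustrative choices, not from the text):

```python
# Exact check of Theorem 6.4 for the Bernoulli pmf f(x|theta) = theta^x (1-theta)^(1-x),
# using the score and second derivative derived above. Illustrative sketch only.

def pmf(x, theta):
    return theta**x * (1 - theta)**(1 - x)

def score(x, theta):                      # d/dtheta log f(x|theta) = (x - theta) / [theta(1-theta)]
    return (x - theta) / (theta * (1 - theta))

def d2_loglik(x, theta):                  # d^2/dtheta^2 log f(x|theta) = -x/theta^2 - (1-x)/(1-theta)^2
    return -x / theta**2 - (1 - x) / (1 - theta)**2

theta = 0.3                               # any value in (0, 1) works
mean_score  = sum(score(x, theta)      * pmf(x, theta) for x in (0, 1))
info_sq     = sum(score(x, theta)**2   * pmf(x, theta) for x in (0, 1))   # E[(d/dtheta log f)^2]
info_neg_d2 = -sum(d2_loglik(x, theta) * pmf(x, theta) for x in (0, 1))   # -E[d^2/dtheta^2 log f]

print(mean_score)               # 0: the score has mean zero
print(info_sq, info_neg_d2)     # the two forms agree; the closed form is derived on p.35
```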
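The same quantity can also be seen by simulation. In the sketch below (assuming numpy; the seed, $\theta = 0.3$, and sample sizes are illustrative), the Fisher information of an i.i.d. Bernoulli sample is estimated by the empirical variance of the total score, which grows linearly in $n$ as in Notes 1 and 3:

```python
# Monte Carlo illustration (assumed setup, not from the original notes):
# the Fisher information of an i.i.d. Bernoulli sample equals the variance of
# its total score, and it grows linearly in n.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3
reps = 200_000                                    # number of simulated samples

for n in (1, 5, 25):
    x = rng.binomial(1, theta, size=(reps, n))    # reps i.i.d. samples of size n
    total_score = ((x - theta) / (theta * (1 - theta))).sum(axis=1)
    print(n, total_score.var(), n / (theta * (1 - theta)))
    # empirical Var(total score) ≈ n * I(theta); the closed form n/(theta(1-theta)) is derived on p.35
```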
Ch8, p.35

The Fisher information of a single observation, say $X_1$, is
$$ I(\theta) = E\!\left[\left(\frac{X_1-\theta}{\theta(1-\theta)}\right)^{2}\right] = \frac{E[(X_1-\theta)^{2}]}{\theta^{2}(1-\theta)^{2}} = \frac{\mathrm{Var}(X_1)}{\theta^{2}(1-\theta)^{2}} = \frac{\theta(1-\theta)}{\theta^{2}(1-\theta)^{2}} = \frac{1}{\theta(1-\theta)}. $$
Equivalently, using the second-derivative form,
$$ I(\theta) = -E\!\left[-\frac{X_1}{\theta^{2}} - \frac{1-X_1}{(1-\theta)^{2}}\right] = \frac{\theta}{\theta^{2}} + \frac{1-\theta}{(1-\theta)^{2}} = \frac{1}{\theta} + \frac{1}{1-\theta} = \frac{1}{\theta(1-\theta)}. $$
The Fisher information of the observations $X_1,\ldots,X_n$ is
$$ I_{X_1,\ldots,X_n}(\theta) = n\,I(\theta) = \frac{n}{\theta(1-\theta)}. $$
Notice that $I_{X_1,\ldots,X_n}(\theta)$
- increases when $n$ increases,
- increases when $\theta \to 0$ or $\theta \to 1$,
- reaches a minimum of $4n$ at $\theta = 0.5$.

Ch8, p.36

Consider a single observation $Y \sim \mathrm{Binomial}(n,\theta)$. The pmf of $Y$ is
$$ f(y\mid\theta) = \binom{n}{y}\theta^{y}(1-\theta)^{n-y}, \quad y\in\{0,1,\ldots,n\}. $$
The second derivative of its log likelihood is
$$ \frac{\partial^{2}}{\partial\theta^{2}}\log f(y\mid\theta) = -\frac{y}{\theta^{2}} - \frac{n-y}{(1-\theta)^{2}}. $$
The Fisher information of $Y$ is
$$ I_Y(\theta) = -E\!\left[-\frac{Y}{\theta^{2}} - \frac{n-Y}{(1-\theta)^{2}}\right] = \frac{n\theta}{\theta^{2}} + \frac{n-n\theta}{(1-\theta)^{2}} = \frac{n}{\theta(1-\theta)}. $$
Note that $I_Y(\theta)$ is the same as $I_{X_1,\ldots,X_n}(\theta)$.

Theorem 6.5 (consistency of MLE, TBp. 275). Under appropriate smoothness conditions on $f$, the MLE from an i.i.d. sample is consistent.

Proof (sketch). Let us denote the true value of $\theta$ by $\theta_0$. The MLE maximizes
$$ \frac{l(\theta)}{n} = \frac{1}{n}\sum_{i=1}^{n}\log f(X_i\mid\theta). $$
The weak law of large numbers implies that, as $n\to\infty$,
$$ \frac{l(\theta)}{n} \xrightarrow{\;P\;} E_{\theta_0}[\log f(X\mid\theta)] = \int \log f(x\mid\theta)\, f(x\mid\theta_0)\,dx. $$
(Question: how do $\int [\log f(x\mid\theta)]\, f(x\mid\theta_0)\,dx$ and $\int [\log f(x\mid\theta_0)]\, f(x\mid\theta_0)\,dx$ compare?)
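Theorem 6.5 can be illustrated by simulation. For the Bernoulli model the MLE is the sample mean $\bar{X}$ (a standard fact, not derived in this excerpt), and its distribution concentrates around $\theta_0$ as $n$ grows. The following is a minimal sketch (assuming numpy; the seed, $\theta_0 = 0.3$, tolerance 0.02, and sample sizes are illustrative choices):

```python
# Simulation sketch of Theorem 6.5 (assumed setup, not from the original notes):
# for i.i.d. Bernoulli(theta_0) data the MLE is the sample mean, and
# P(|MLE - theta_0| > eps) shrinks toward 0 as n grows.
import numpy as np

rng = np.random.default_rng(1)
theta0 = 0.3
reps = 10_000                                          # simulated samples per n

for n in (10, 100, 1000, 10_000):
    x = rng.binomial(1, theta0, size=(reps, n))
    mle = x.mean(axis=1)                               # MLE of theta for each simulated sample
    print(n, np.mean(np.abs(mle - theta0) > 0.02))     # estimated P(|MLE - theta_0| > 0.02)
```

The printed probabilities decrease toward zero with $n$, which is exactly the convergence in probability claimed by the theorem.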