# Dr. Hackney STA Solutions pg 100 - Second Edition 7-3
The likelihood function is
$$L(\theta \mid \mathbf{x}) = \prod_{i=1}^{n} \frac{1}{\theta}\, I_{[0,\theta]}(x_i) = \frac{1}{\theta^{n}}\, I_{[0,\theta]}(x_{(n)})\, I_{[0,\infty)}(x_{(1)}),$$
where $x_{(1)}$ and $x_{(n)}$ are the smallest and largest order statistics. For $\theta \ge x_{(n)}$, $L = 1/\theta^{n}$, a decreasing function. So for $\theta \ge x_{(n)}$, $L$ is maximized at $\hat{\theta} = x_{(n)}$. $L = 0$ for $\theta < x_{(n)}$. So the overall maximum, the MLE, is $\hat{\theta} = X_{(n)}$.

The pdf of $\hat{\theta} = X_{(n)}$ is $n x^{n-1}/\theta^{n}$, $0 \le x \le \theta$. This can be used to calculate
$$\mathrm{E}\,\hat{\theta} = \frac{n}{n+1}\theta, \qquad \mathrm{E}\,\hat{\theta}^{2} = \frac{n}{n+2}\theta^{2}, \qquad \text{and} \qquad \operatorname{Var}\hat{\theta} = \frac{n\theta^{2}}{(n+2)(n+1)^{2}}.$$
$\tilde{\theta}$ is an unbiased estimator of $\theta$; $\hat{\theta}$ is a biased estimator. If $n$ is large, the bias is not large because $n/(n+1)$ is close to one. But if $n$ is small, the bias is quite large. On the other hand, $\operatorname{Var}\hat{\theta} < \operatorname{Var}\tilde{\theta}$ for all $\theta$. So, if $n$ is large, $\hat{\theta}$ is probably preferable to $\tilde{\theta}$.

7.10 a.
$$f(\mathbf{x} \mid \alpha, \beta) = \prod_i \frac{\alpha}{\beta^{\alpha}}\, x_i^{\alpha-1}\, I_{[0,\beta]}(x_i) = \left(\frac{\alpha}{\beta^{\alpha}}\right)^{n} \left(\prod_i x_i\right)^{\alpha-1} I_{(-\infty,\beta]}(x_{(n)})\, I_{[0,\infty)}(x_{(1)}) = L(\alpha, \beta \mid \mathbf{x}).$$
By the Factorization Theorem, $\left(\prod_i X_i,\, X_{(n)}\right)$ are sufficient.

b. For any fixed $\alpha$, $L(\alpha, \beta \mid \mathbf{x}) = 0$ if $\beta < x_{(n)}$, and $L(\alpha, \beta \mid \mathbf{x})$ is a decreasing function of $\beta$ if $\beta \ge x_{(n)}$. Thus,
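The bias claim above, $\mathrm{E}\,\hat{\theta} = \frac{n}{n+1}\theta$, is easy to check numerically. The following is a minimal Monte Carlo sketch (the choices of $n$, $\theta$, trial count, and the helper name `simulate_mle_mean` are illustrative, not from the solutions text):

```python
# Monte Carlo check: for X_1,...,X_n iid Uniform(0, theta), the MLE
# theta_hat = X_(n) (the sample maximum) has E[theta_hat] = n/(n+1) * theta.
import random

def simulate_mle_mean(n, theta, trials, seed=0):
    """Average of X_(n) over many samples of size n from Uniform(0, theta)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.uniform(0.0, theta) for _ in range(n))
    return total / trials

n, theta = 5, 2.0
approx = simulate_mle_mean(n, theta, trials=200_000)
exact = n / (n + 1) * theta   # theoretical mean: (5/6) * 2
print(approx, exact)
```

With $n = 5$ the simulated mean sits close to $\frac{5}{6}\theta$, visibly below $\theta$, which illustrates why the bias matters for small $n$ even though $n/(n+1) \to 1$.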