2028: Basic Statistical Methods — Solutions to Homework 3

I Point Estimation

1. (7.41, parts (a) and (b), page 246)

(a) Since $E(X^2) = 2\theta$, equating this to the second sample moment $\frac{1}{n}\sum_{i=1}^{n} X_i^2$ gives the MOM estimator
$$\hat\theta = \frac{1}{2n}\sum_{i=1}^{n} X_i^2,$$
which is unbiased.

(b) The likelihood and log-likelihood functions are
$$L(\theta) = \prod_{i=1}^{n} \frac{x_i\, e^{-x_i^2/(2\theta)}}{\theta}, \qquad l(\theta) = \sum_{i=1}^{n} \log(x_i) - \frac{\sum_{i=1}^{n} x_i^2}{2\theta} - n\log(\theta).$$
It follows that the likelihood equation is
$$\frac{\partial l(\theta)}{\partial \theta} = \frac{\sum_{i=1}^{n} x_i^2}{2\theta^2} - \frac{n}{\theta} = 0.$$
The solution to this equation is $\hat\theta = \frac{1}{2n}\sum_{i=1}^{n} X_i^2$, the same estimator obtained in part (a).

2. (a) We obtain an MOM estimator by equating the first moment of the $U(0,a)$ distribution to the first sample moment, the sample mean: $E(X_i) = \bar X$, where $E(X_i) = (a+0)/2$, resulting in $\hat a = 2\bar X$.

(b) The likelihood function is
$$L(a) = \prod_{i=1}^{n} \frac{1}{a}\, I(x_i \le a) = \frac{1}{a^n} \prod_{i=1}^{n} I(x_i \le a) = \frac{1}{a^n}\, I(\max\{x_1,\dots,x_n\} \le a).$$
Because $L(a) = 0$ for $a < \max\{x_1,\dots,x_n\}$ and $L(a) = \frac{1}{a^n} > 0$ for $a \ge \max\{x_1,\dots,x_n\}$, the solution to $\max_a L(a)$ must be greater than or equal to $\max\{x_1,\dots,x_n\}$. Moreover, since $L(a) = \frac{1}{a^n}$ is a decreasing function of $a$, its maximum is attained at the smallest admissible value, which is $\max\{x_1,\dots,x_n\}$. Therefore $\hat a = \max\{x_1,\dots,x_n\}$.

(c) The bias of $\hat a$ is
$$|E(\hat a) - a| = \left|\frac{na}{n+1} - a\right| = \frac{|na - (n+1)a|}{n+1} = \frac{a}{n+1},$$
which converges to zero as $n$ increases.

(d) The MLE is biased, as shown in part (c), while the MOM estimator in part (a) is unbiased.

3. (7.13, page 231) By the additivity property of the normal distribution, $\bar X_1 - \bar X_2$ is normally distributed; it remains to find its mean and variance. Using the formulas for the mean and variance of the sampling distribution of the sample mean,
$$E(\bar X_1 - \bar X_2) = E(\bar X_1) - E(\bar X_2) = \mu_1 - \mu_2,$$
$$V(\bar X_1 - \bar X_2) = V(\bar X_1) + V(\bar X_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}.$$
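The contrast in problem 2 between the biased MLE ($\hat a = \max\{x_1,\dots,x_n\}$, bias $a/(n+1)$) and the unbiased MOM estimator ($\hat a = 2\bar X$) can be checked numerically. The following sketch (the values $a = 10$, $n = 5$ are illustrative choices, not from the problem) simulates many samples from $U(0,a)$ and compares the average of each estimator with theory:

```python
import random

random.seed(0)
a = 10.0        # true upper bound (illustrative choice)
n = 5           # sample size (illustrative choice)
trials = 200_000

mle_sum = 0.0
mom_sum = 0.0
for _ in range(trials):
    xs = [random.uniform(0, a) for _ in range(n)]
    mle_sum += max(xs)            # MLE: sample maximum
    mom_sum += 2 * sum(xs) / n    # MOM: twice the sample mean

mle_mean = mle_sum / trials
mom_mean = mom_sum / trials

# Theory predicts E[max] = n*a/(n+1), so the MLE's bias is a/(n+1),
# while E[2*Xbar] = a, so the MOM estimator is unbiased.
print(mle_mean, n * a / (n + 1))  # MLE average vs. n*a/(n+1)
print(mom_mean, a)                # MOM average vs. a
```

With these settings the simulated MLE average settles near $na/(n+1) = 25/3 \approx 8.33$, confirming the bias $a/(n+1) \approx 1.67$, while the MOM average settles near $a = 10$.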
