
# Dr. Hackney STA Solutions pg 189 - Second Edition 11-17


c. The EM calculations are simple here. Since $y_n^{(t)} = \hat{a}^{(t)} + \hat{b}^{(t)} x_n$, the estimates of $a$ and $b$ must converge to the least squares estimates (they minimize the sum of squares of the observed data, and the last term adds nothing). For $\hat{\sigma}^2$ we have (substituting the least squares estimates) the stationary point

$$
\hat{\sigma}^2 = \frac{\sum_{i=1}^{n}\left[y_i - (\hat{a} + \hat{b} x_i)\right]^2 + \hat{\sigma}^2}{n}
\quad\Longrightarrow\quad
\hat{\sigma}^2 = \sigma^2_{\mathrm{obs}},
$$

where $\sigma^2_{\mathrm{obs}}$ is the MLE from the $n-1$ observed data points. So the MLEs are the same as those without the extra $x_n$.

d. Now we use the bivariate normal density (see Definition 4.5.10 and Exercise 4.45). Denote the density by $\varphi(x,y)$. Then the expected complete-data log likelihood is

$$
\sum_{i=1}^{n-1} \log \varphi(x_i, y_i) + \mathrm{E}\,\log \varphi(X, y_n),
$$

where after iteration $t$ the missing-data density is the conditional density of $X$ given $Y = y_n$,

$$
X \mid Y = y_n \sim \mathrm{n}\!\left(\mu_X^{(t)} + \rho^{(t)}\bigl(\sigma_X^{(t)}/\sigma_Y^{(t)}\bigr)\bigl(y_n - \mu_Y^{(t)}\bigr),\;\bigl(1 - \rho^{2(t)}\bigr)\sigma_X^{2(t)}\right).
$$

Denoting the mean by $\mu_0$ and the variance by $\sigma_0^2$, the expected value of the last piece in the likelihood is

$$
\mathrm{E}\,\log \varphi(X, y_n)
= -\frac{1}{2}\log\bigl(4\pi^2 \sigma_X^2 \sigma_Y^2 (1-\rho^2)\bigr)
- \frac{1}{2(1-\rho^2)}\left[
\mathrm{E}\!\left(\frac{X-\mu_X}{\sigma_X}\right)^{\!2}
- 2\rho\,\mathrm{E}\!\left(\frac{(X-\mu_X)(y_n-\mu_Y)}{\sigma_X \sigma_Y}\right)
+ \left(\frac{y_n-\mu_Y}{\sigma_Y}\right)^{\!2}
\right]
= -\frac{1}{2}\log\bigl(2\pi\sigma^2\cdots
$$
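The fixed point claimed in part (c) is easy to check numerically. The sketch below (the simulated data, seed, and variable names are our own, not from the text) runs the EM iteration for simple linear regression with $y_n$ missing: the E-step imputes $y_n^{(t)} = \hat{a}^{(t)} + \hat{b}^{(t)} x_n$ and carries the conditional variance $\hat{\sigma}^{2(t)}$ into the sum of squares, and the M-step refits the complete-data MLEs. The iteration settles at the least squares estimates from the $n-1$ observed pairs, with $\hat{\sigma}^2 = \sigma^2_{\mathrm{obs}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
# Treat y[n-1] as missing; the observed data are the first n-1 pairs.
xo, yo = x[:-1], y[:-1]

# Least squares / MLE from the n-1 observed points.
b_obs = np.cov(xo, yo, bias=True)[0, 1] / np.var(xo)
a_obs = yo.mean() - b_obs * xo.mean()
s2_obs = np.mean((yo - a_obs - b_obs * xo) ** 2)

# EM iteration: impute the missing response at its conditional mean,
# then refit the complete-data MLEs; the conditional variance s2 of the
# imputed y_n is added to the residual sum of squares in the s2 update.
a, b, s2 = 0.0, 0.0, 1.0
for _ in range(200):
    y_fill = np.append(yo, a + b * x[-1])          # E-step: impute y_n
    b = np.cov(x, y_fill, bias=True)[0, 1] / np.var(x)   # M-step: slope
    a = y_fill.mean() - b * x.mean()                     # M-step: intercept
    s2 = (np.sum((y_fill - a - b * x) ** 2) + s2) / n    # M-step: variance

print(np.allclose([a, b], [a_obs, b_obs]))  # True: EM reproduces the LS fit
print(np.isclose(s2, s2_obs))               # True: sigma^2 -> sigma^2_obs
```

At the fixed point the imputed residual is zero, so the variance equation reduces to $n\hat{\sigma}^2 = (n-1)\sigma^2_{\mathrm{obs}} + \hat{\sigma}^2$, which is exactly the stationary point displayed above.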

