Solution to Tutorial 10

Tutorial 10

1. Derive the weighted least squares normal equations for fitting a simple linear regression function when $\sigma_i^2 = kX_i$, where $k > 0$ is a constant.

Let
$$ Q_w(b_0, b_1) = \sum_{i=1}^n \frac{1}{kX_i}\,(Y_i - b_0 - b_1 X_i)^2. $$
Then
$$ \frac{\partial Q_w(b_0, b_1)}{\partial b_0} = -2 \sum_{i=1}^n \frac{1}{kX_i}\,(Y_i - b_0 - b_1 X_i), \qquad \frac{\partial Q_w(b_0, b_1)}{\partial b_1} = -2 \sum_{i=1}^n \frac{1}{k}\,(Y_i - b_0 - b_1 X_i). $$
Setting the derivatives equal to zero and simplifying,
$$ \sum_{i=1}^n \frac{Y_i}{X_i} - b_0 \sum_{i=1}^n \frac{1}{X_i} - n b_1 = 0, \qquad \sum_{i=1}^n Y_i - n b_0 - b_1 \sum_{i=1}^n X_i = 0. $$
Solving these two equations, we have
$$ b_1 = \frac{\sum_{i=1}^n Y_i/X_i - \bar Y \sum_{i=1}^n 1/X_i}{\,n - \bar X \sum_{i=1}^n 1/X_i\,}, \qquad b_0 = \bar Y - b_1 \bar X, $$
where $\bar X = \sum_{i=1}^n X_i / n$ and $\bar Y = \sum_{i=1}^n Y_i / n$.

2. For the linear regression model
$$ Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \varepsilon_i, \quad i = 1, \ldots, n, $$
with
$$ \mathrm{Var}\begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix} = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}:$$
(a) If the LSE, $b$, is used, is the estimator unbiased? What is the variance of the estimated coefficients, $\mathrm{Var}(b)$?
(b) With $w_i = 1/\sigma_i^2$, derive the weighted least squares estimator $b_w$, and calculate $\mathrm{Var}(b_w)$.
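As a numerical sanity check (not part of the original solution), the closed-form $b_0$ and $b_1$ above can be compared against a generic weighted normal-equation solve. The data below are simulated and all names are illustrative:

```python
import numpy as np

# Simulated data with sigma_i^2 = k * X_i, so the WLS weights are w_i = 1/(k*X_i).
rng = np.random.default_rng(0)
n, k = 50, 2.0
X = rng.uniform(1.0, 10.0, size=n)
Y = 3.0 + 0.5 * X + rng.normal(scale=np.sqrt(k * X))
w = 1.0 / (k * X)

# Closed-form WLS coefficients from the normal equations derived above.
Xbar, Ybar = X.mean(), Y.mean()
b1 = (np.sum(Y / X) - Ybar * np.sum(1.0 / X)) / (n - Xbar * np.sum(1.0 / X))
b0 = Ybar - b1 * Xbar

# Generic check: minimize sum_i w_i (Y_i - b0 - b1*X_i)^2 via the matrix
# normal equations (A'WA) b = A'WY with A = [1, X].
A = np.column_stack([np.ones(n), X])
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ Y)

assert np.allclose([b0, b1], coef)
```

Both routes solve the same two normal equations, so they must agree up to floating-point error; note the denominator $n - \bar X \sum 1/X_i$ is nonzero (indeed negative, by the AM–HM inequality) unless all $X_i$ are equal.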
(a) The LSE is $b = (X'X)^{-1}X'Y$. We have
$$ E(b) = (X'X)^{-1}X'E(Y) = (X'X)^{-1}X'X\beta = \beta, $$
so it is still unbiased. Because $\mathrm{Var}(Y) = \Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_n^2)$,
$$ \mathrm{Var}(b) = (X'X)^{-1}X'\,\mathrm{Var}(Y)\,X(X'X)^{-1} = (X'X)^{-1}X'\Sigma X(X'X)^{-1}. $$
(b) With $W = \mathrm{diag}(w_1, \ldots, w_n) = \Sigma^{-1}$, the weighted least squares estimator minimizes $\sum_{i=1}^n w_i (Y_i - X_i'b)^2$, giving
$$ b_w = (X'WX)^{-1}X'WY, $$
and since $W\Sigma W = W$,
$$ \mathrm{Var}(b_w) = (X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} = (X'WX)^{-1} = (X'\Sigma^{-1}X)^{-1}. $$
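These two covariance formulas can be checked by Monte Carlo (again a sketch with simulated data, not part of the original solution; the sample size, coefficients, and variance range below are arbitrary choices):

```python
import numpy as np

# Simulated design with p = 2 predictors and heteroscedastic errors of known variance.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -0.5])
sigma2 = rng.uniform(0.5, 4.0, size=n)   # diagonal of Var(eps)
Sigma = np.diag(sigma2)
W = np.diag(1.0 / sigma2)                # WLS weights w_i = 1/sigma_i^2

# Theoretical covariances from the derivation above.
XtX_inv = np.linalg.inv(X.T @ X)
var_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv   # sandwich form for the LSE
var_wls = np.linalg.inv(X.T @ W @ X)            # (X' Sigma^{-1} X)^{-1}

# Monte Carlo: both estimators should average to beta, with the stated covariances.
reps = 5000
b_ols = np.empty((reps, 3))
b_wls = np.empty((reps, 3))
for r in range(reps):
    Y = X @ beta + rng.normal(scale=np.sqrt(sigma2))
    b_ols[r] = np.linalg.solve(X.T @ X, X.T @ Y)
    b_wls[r] = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)

print(b_ols.mean(axis=0))   # both sample means land close to beta: unbiased
print(b_wls.mean(axis=0))
print(np.cov(b_ols.T))      # close to var_ols
print(np.cov(b_wls.T))      # close to var_wls
```

By the Gauss–Markov theorem applied to the transformed (homoscedastic) model, `var_ols - var_wls` is positive semidefinite, so every diagonal entry of `var_wls` is no larger than the corresponding entry of `var_ols`.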

This note was uploaded on 10/04/2010 for the course STAT ST3131 taught by Professor Xia Yingcun during the Fall '09 term at National University of Singapore.
