LINREG3

$$\frac{\hat{\beta}_k - \beta_k}{\sigma\sqrt{\sum_{j=1}^{n} w_{k,j}^{2}}} \sim N(0,1). \qquad (18)$$

The error variance $\sigma^2$ can be estimated similarly to the case of the two-variable linear regression model, namely using the sum of squared residuals

$$SSR = \sum_{j=1}^{n} \hat{U}_j^{2}, \qquad (19)$$

where

$$\hat{U}_j = Y_j - \sum_{i=1}^{k} \hat{\beta}_i X_{i,j} \qquad (20)$$

is the OLS residual. It can be shown that under Assumptions 1–3,

$$\frac{\sum_{j=1}^{n} \hat{U}_j^{2}}{\sigma^{2}} \sim \chi^{2}_{n-k}. \qquad (21)$$

Since the expected value of a $\chi^{2}_{n-k}$ distributed random variable is $n - k$, the result (21) suggests estimating $\sigma^2$ by

$$\hat{\sigma}^{2} = \frac{1}{n-k} \sum_{j=1}^{n} \hat{U}_j^{2}. \qquad (22)$$

Due to (21), this estimator is unbiased: $E[\hat{\sigma}^{2}] = \sigma^{2}$. Moreover, it can be shown that under Assumptions 1–3, $\sum_{j=1}^{n} \hat{U}_j^{2}$ is independent of the $\hat{\beta}_i$'s; hence it follows from (18) and (21) and the definition of the t distribution that under Assumptions 1–3,
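As a concrete illustration, the residuals (20) and the unbiased variance estimator (22) can be computed with a few lines of NumPy. The data below are simulated for the example, with a made-up design matrix and a true error standard deviation of 1.5, so that $\hat{\sigma}^{2}$ should land near $1.5^2 = 2.25$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3  # n observations, k regressors (including the intercept)

# Hypothetical design: intercept plus two random regressors.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -0.5])
Y = X @ beta + rng.normal(scale=1.5, size=n)  # true sigma = 1.5

# OLS estimate, and residuals as in (20).
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
U_hat = Y - X @ beta_hat

# Sum of squared residuals (19) and unbiased variance estimator (22),
# dividing by the degrees of freedom n - k rather than n.
SSR = np.sum(U_hat**2)
sigma2_hat = SSR / (n - k)
print(sigma2_hat)
```

Dividing $SSR$ by $n$ instead of $n - k$ would bias the estimator downward, since $k$ degrees of freedom are used up by the estimated coefficients.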

$$\frac{\hat{\beta}_1 - \beta_1}{\hat{\sigma}\sqrt{\sum_{j=1}^{n} w_{1,j}^{2}}} \sim t_{n-k}, \quad \ldots, \quad \frac{\hat{\beta}_k - \beta_k}{\hat{\sigma}\sqrt{\sum_{j=1}^{n} w_{k,j}^{2}}} \sim t_{n-k}. \qquad (23)$$

The denominators involved are the standard errors of the corresponding OLS estimators:

$$\hat{\sigma}_i = \hat{\sigma}\sqrt{\sum_{j=1}^{n} w_{i,j}^{2}} \quad (= \text{standard error of } \hat{\beta}_i). \qquad (24)$$

The results (18), (21) and (23) do not hinge on the assumption that the explanatory variables $X_{i,j}$ are nonrandom, though. They also hold if we replace Assumption 2 by

Assumption 2*: The model variables $Y_j, X_{1,j}, \ldots, X_{k-1,j}$ are independent and identically distributed across the observations $j = 1, \ldots, n$,

and if we replace Assumption 3 by

Assumption 3*: Conditionally on $X_{1,j}, \ldots, X_{k-1,j}$, the errors $U_j$ are $N(0, \sigma^2)$ distributed.

Proposition 1: Under Assumptions 1, 2* and 3*, the results (18), (21) and (23) carry over.

Furthermore, if instead of Assumption 3* we assume

Assumption 3**: $E[U_j \mid X_{1,j}, \ldots, X_{k-1,j}] = 0$, $E[U_j^{2} \mid X_{1,j}, \ldots, X_{k-1,j}] = \sigma^{2} < \infty$, and $E[X_{i,j}^{2}] < \infty$ for $i = 1, \ldots, k-1$,

then it can be shown that
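In matrix form the OLS weights satisfy $\sum_{j=1}^{n} w_{i,j}^{2} = [(X'X)^{-1}]_{ii}$, so the standard errors (24) can be read off the diagonal of $(X'X)^{-1}$. A minimal sketch, again on simulated data invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
Y = X @ np.array([0.5, 1.0, 0.0]) + rng.normal(size=n)

# OLS via the normal equations, then sigma_hat as in (22).
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
U_hat = Y - X @ beta_hat
sigma_hat = np.sqrt(U_hat @ U_hat / (n - k))

# Standard errors (24): sqrt(sum_j w_{i,j}^2) is the square root of
# the i-th diagonal element of (X'X)^{-1}.
XtX_inv = np.linalg.inv(X.T @ X)
std_errors = sigma_hat * np.sqrt(np.diag(XtX_inv))
print(std_errors)
```

Each entry of `std_errors` is the denominator of the corresponding t-ratio in (23).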
Proposition 2: Under Assumptions 1, 2* and 3**,

$$\frac{\hat{\beta}_1 - \beta_1}{\hat{\sigma}\sqrt{\sum_{j=1}^{n} w_{1,j}^{2}}} \sim N(0,1), \quad \ldots, \quad \frac{\hat{\beta}_k - \beta_k}{\hat{\sigma}\sqrt{\sum_{j=1}^{n} w_{k,j}^{2}}} \sim N(0,1), \qquad (25)$$

provided that n is large.

3. Testing parameter hypotheses

The results (23) and (25) can be used to test whether a particular coefficient $\beta_i$ is zero or not, similar to the case of the two-variable linear regression model. The test statistic involved is the corresponding t-value,

$$\hat{t}_i = \frac{\hat{\beta}_i}{\hat{\sigma}_i} = \frac{\hat{\beta}_i}{\hat{\sigma}\sqrt{\sum_{j=1}^{n} w_{i,j}^{2}}}. \qquad (26)$$

Proposition 3: Under the null hypothesis $\beta_i = 0$ and the conditions of Proposition 1, $\hat{t}_i \sim t_{n-k}$, and under the null hypothesis involved and the conditions of Proposition 2, $\hat{t}_i \sim N(0,1)$. Moreover, if $\beta_i > 0$ then $\hat{t}_i$ converges in probability to $\infty$ as $n \to \infty$,¹ and if $\beta_i < 0$ then $\hat{t}_i$ converges in probability to $-\infty$ as $n \to \infty$.²

The test can now be conducted in the same way as in the case of the two-variable linear regression model, either left-sided, right-sided or two-sided. The only difference is the degrees of freedom, which is $n - k$ instead of $n - 2$ in the two-variable linear regression case.

¹ This means that for any constant $K > 0$, $\lim_{n \to \infty} P(\hat{t}_i > K) = 1$.
² This means that for any constant $K > 0$, $\lim_{n \to \infty} P(\hat{t}_i < -K) = 1$.
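The two-sided test based on (26) can be sketched as follows on simulated data. The example uses the large-n normal approximation of (25), so the 5% critical value is the standard-normal 1.96 rather than the exact $t_{n-k}$ quantile; the data and coefficient values are made up, with the third coefficient set to zero so that its null hypothesis is true:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
# True beta_3 = 0, so the third t-value tests a true null hypothesis.
Y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)

# OLS fit, sigma_hat as in (22), standard errors as in (24).
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
U_hat = Y - X @ beta_hat
sigma_hat = np.sqrt(U_hat @ U_hat / (n - k))
std_errors = sigma_hat * np.sqrt(np.diag(np.linalg.inv(X.T @ X)))

# t-values (26) and the two-sided 5% test, N(0,1) approximation per (25).
t_values = beta_hat / std_errors
reject = np.abs(t_values) > 1.96
print(t_values, reject)
```

With n = 200 the coefficient equal to 2 produces a very large t-value and is rejected, illustrating the divergence claim of Proposition 3; for a small sample one would replace 1.96 by the $t_{n-k}$ critical value.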
