LINREG2


The conditional variance of $\hat\beta$ [see (60)] under heteroscedasticity takes the form

$$\mathrm{var}(\hat\beta \mid X_1,\dots,X_n) = E\!\left[(\hat\beta-\beta)^2 \mid X_1,\dots,X_n\right] = \frac{\sum_{j=1}^{n}(X_j-\bar X)^2\, R(X_j)}{\left(\sum_{i=1}^{n}(X_i-\bar X)^2\right)^2}. \tag{36}$$

A cure for the heteroscedasticity problem is to replace the standard error of $\hat\beta$ by

$$\tilde\sigma_{\hat\beta} = \sqrt{\frac{n}{n-2}\cdot\frac{\sum_{j=1}^{n}(X_j-\bar X)^2\,\hat U_j^2}{\left(\sum_{i=1}^{n}(X_i-\bar X)^2\right)^2}}. \tag{37}$$

This is known as the Heteroscedasticity Consistent (H.C.) standard error. The H.C. t-value then becomes $\tilde t_{\hat\beta} = \hat\beta / \tilde\sigma_{\hat\beta}$. Under the null hypothesis β = 0 this t-value is no longer t-distributed, but the standard normal approximation remains valid if the sample size n is large.

A popular test for heteroscedasticity is the Breusch-Pagan test.⁶ Given that

$$E[U_j^2 \mid X_j] = g(\gamma_0 + \gamma_1 X_j) \quad \text{for some unknown function } g(\cdot), \tag{38}$$

the Breusch-Pagan test tests the null hypothesis

$$H_0:\ \gamma_1 = 0\ \Rightarrow\ E[U_j^2 \mid X_j] = g(\gamma_0) = \sigma^2,\ \text{say}, \tag{39}$$

against the alternative hypothesis

$$H_1:\ \gamma_1 \neq 0\ \Rightarrow\ E[U_j^2 \mid X_j] = g(\gamma_0 + \gamma_1 X_j) = R(X_j),\ \text{say}. \tag{40}$$

Under the null hypothesis (39) of homoskedasticity the test statistic of the Breusch-Pagan test has a $\chi^2_1$ distribution,⁷ and the test is conducted right-sided.

⁶ Breusch, T. and A. Pagan (1979), "A Simple Test for Heteroscedasticity and Random Coefficient Variation", Econometrica 47, 1287-1294.
⁷ In the multiple regression case the degrees of freedom equals the number of parameters minus 1 for the intercept.

12. How close are OLS estimators?

The ice cream data in Table 1 is not based on any actual observations on sales and temperature; I have picked the numbers for $X_j$ and $Y_j$ quite arbitrarily. Therefore, there is no way to find out how close the OLS estimates $\hat\alpha = -0.25$, $\hat\beta = 1.5$ are to the unknown parameters α and β. Actually, we do not know either whether the linear regression model (2) and its assumptions are applicable to these artificial data.
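As a numerical illustration of the heteroscedasticity machinery in equations (36)-(40) above, the sketch below (my own, not part of the notes; the simulated data and all variable names are assumptions of this example) computes the H.C. standard error of eq. (37), the H.C. t-value, and a Breusch-Pagan-type statistic. For the test statistic I use the studentized (n·R² of the auxiliary regression) variant, which is asymptotically $\chi^2_1$ under homoskedasticity:

```python
import numpy as np

# --- simulated heteroscedastic data (assumption of this sketch) ---
rng = np.random.default_rng(0)
n = 200
X = rng.chisquare(df=1, size=n)
U = rng.normal(size=n) * (1.0 + X)      # error variance grows with X
Y = 1.0 + 1.0 * X + U                   # true alpha = beta = 1

# --- OLS estimates and residuals ---
Xbar, Ybar = X.mean(), Y.mean()
Sxx = np.sum((X - Xbar) ** 2)
beta_hat = np.sum((X - Xbar) * (Y - Ybar)) / Sxx
alpha_hat = Ybar - beta_hat * Xbar
U_hat = Y - alpha_hat - beta_hat * X    # OLS residuals

# --- eq. (37): H.C. standard error and H.C. t-value ---
hc_se = np.sqrt((n / (n - 2)) * np.sum((X - Xbar) ** 2 * U_hat ** 2) / Sxx ** 2)
t_hc = beta_hat / hc_se                 # approx. N(0,1) under H0: beta = 0, large n

# --- studentized Breusch-Pagan statistic: regress U_hat^2 on X, use n * R^2 ---
Z = U_hat ** 2
Zbar = Z.mean()
b_aux = np.sum((X - Xbar) * (Z - Zbar)) / Sxx
a_aux = Zbar - b_aux * Xbar
resid_aux = Z - a_aux - b_aux * X
R2_aux = 1.0 - np.sum(resid_aux ** 2) / np.sum((Z - Zbar) ** 2)
bp_stat = n * R2_aux                    # compare right-sided with chi-square(1)

print(f"beta_hat={beta_hat:.3f}  HC se={hc_se:.3f}  HC t={t_hc:.2f}  BP={bp_stat:.2f}")
```

With errors this strongly heteroscedastic, the BP statistic will typically exceed the 5% critical value of $\chi^2_1$ (about 3.84), correctly rejecting homoskedasticity.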
In order to show how well OLS estimators approximate the corresponding parameters, I have generated random samples⁸ $(Y_1,X_1),\dots,(Y_n,X_n)$ for three sample sizes, n = 10, n = 100 and n = 1000, as follows. The explanatory variables $X_j$ have been drawn independently from the $\chi^2_1$ distribution, the regression errors $U_j$ have been drawn independently from the N(0,1) distribution, and the $Y_j$'s have been generated by

$$Y_j = 1 + X_j + U_j, \quad j = 1,2,\dots,n. \tag{41}$$

Thus, in this case the parameters α and β in model (2) are α = 1 and β = 1, and the standard error of $U_j$ is σ = 1. Moreover, note that Assumptions I*-IV* hold for model (41). The true $R^2$ can be defined by

$$R^2 = 1 - \frac{E[SSR]}{E[TSS]} = 1 - \frac{(n-2)\,\sigma^2}{\sum_{j=1}^{n} E[(Y_j-\bar Y)^2]} \;\dots$$

⁸ Via the EasyReg International menus File → Choose an input file → Create artificial data. Rather than generating one random sample of size n = 1000 and then using subsamples of sizes n = 10 and n = 100, these samples have been generated separately for n = 10, n = 100 and n = 1000.
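The data-generating process (41) is easy to re-create. The sketch below is my own Python re-creation, not the EasyReg output used in the notes, so the particular draws (and hence the estimates) will differ from those reported there; it simply shows the OLS estimates tightening around α = 1 and β = 1 as n grows:

```python
import numpy as np

rng = np.random.default_rng(42)

def ols_simple(X, Y):
    """OLS intercept and slope for the simple regression of Y on X."""
    Xbar, Ybar = X.mean(), Y.mean()
    b = np.sum((X - Xbar) * (Y - Ybar)) / np.sum((X - Xbar) ** 2)
    a = Ybar - b * Xbar
    return a, b

for n in (10, 100, 1000):
    X = rng.chisquare(df=1, size=n)     # X_j ~ chi-square(1), as in (41)
    U = rng.normal(size=n)              # U_j ~ N(0,1), so sigma = 1
    Y = 1.0 + X + U                     # eq. (41): true alpha = beta = 1
    a_hat, b_hat = ols_simple(X, Y)
    print(f"n={n:5d}: alpha_hat={a_hat:6.3f}  beta_hat={b_hat:6.3f}")
```

Because the conditional variance of $\hat\beta$ shrinks at rate $1/\sum_j (X_j-\bar X)^2$, the n = 1000 estimates land much closer to the true parameters than the n = 10 estimates in a typical run.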