Econ 103, UCLA, Fall 2010
Problem Set 2 Solutions by Joe Kuehn

Part 1: True or False and explain briefly why.

1. To obtain the slope estimator using the least squares principle, we divide the sample covariance of X and Y by the sample variance of Y.
False. The slope estimator divides the sample covariance of X and Y by the sample variance of X: \hat{\beta}_1 = s_{XY} / s_X^2.

2. The OLS intercept coefficient is equal to the average of the Y_i in the sample.
False. The intercept is \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}.

3. Among all unbiased estimators that are weighted averages of Y_1, ..., Y_n, \hat{\beta}_1 is the most unbiased estimator of \beta_1.
False. \hat{\beta}_1 is unbiased, so it is not more or less unbiased than any other unbiased estimator. The Gauss-Markov theorem says that it is the best (most efficient) among linear unbiased estimators.

4. When the estimated slope coefficient in the simple regression model, \hat{\beta}_1, is zero, then R^2 = 0.
True. When \hat{\beta}_1 = 0, X_i explains none of the variation in Y_i, so the ESS (explained sum of squares) is 0. Thus R^2 = ESS/TSS = 0.

5. The standard error of the regression is equal to 1 - R^2.
False. SER = \sqrt{\frac{1}{n-2} \sum_{i=1}^n \hat{u}_i^2}, while 1 - R^2 = \frac{\sum_{i=1}^n \hat{u}_i^2}{\sum_{i=1}^n (Y_i - \bar{Y})^2}; the two are not equal.

6. Heteroskedasticity is when the variance of u_i depends on the value of X_i.
True. This is the definition of heteroskedasticity.

7. The output from the Stata command regress y x reports the p-value associated with the test of the null hypothesis that \beta_1 = 0.
True. The p-value for the test of \beta_1 = 0 is reported in the Stata regression output under the column P>|t|.

8. ESS = SSR + TSS.
False. ESS = TSS - SSR.

9. In the simple regression we can compute R^2 as \left( \frac{Cov(X,Y)}{s_X s_Y} \right)^2.
True. As stated in class, R^2 equals the squared sample correlation coefficient between X and Y.

10. The sample average of the OLS residuals is zero.
True. \frac{1}{n}\sum_{i=1}^n \hat{u}_i = \bar{Y} - \frac{1}{n}\sum_{i=1}^n (\hat{\beta}_0 + \hat{\beta}_1 X_i) = \bar{Y} - \hat{\beta}_0 - \hat{\beta}_1 \bar{X} = \bar{Y} - (\bar{Y} - \hat{\beta}_1 \bar{X}) - \hat{\beta}_1 \bar{X} = 0, using \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}.
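The identities in items 1, 2, and 10 can be checked numerically. A minimal sketch using NumPy on made-up data (the sample size, coefficients, and random seed are illustrative, not part of the problem set):

```python
import numpy as np

# Simulated data: y = 2 + 3x + noise (illustrative values only)
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 3.0 * x + rng.normal(size=50)

# Item 1: slope = sample covariance of X and Y over sample variance of X
beta1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Item 2: intercept = Y-bar minus slope times X-bar
beta0 = y.mean() - beta1 * x.mean()

# Item 10: the OLS residuals average to zero (up to floating-point error)
resid = y - (beta0 + beta1 * x)
print(abs(resid.mean()) < 1e-10)  # True
```

The same slope and intercept come out of any OLS routine (e.g. `np.polyfit(x, y, 1)`), which is a quick way to confirm the covariance/variance formula.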
11. Multiplying the dependent variable by 100 and the explanatory variable by 100,000 leaves the OLS estimate of the slope the same.
False. \hat{\beta}_1^{new} = \frac{\sum_{i=1}^n (100{,}000\,x_i - 100{,}000\,\bar{x})(100\,y_i - 100\,\bar{y})}{\sum_{i=1}^n (100{,}000\,x_i - 100{,}000\,\bar{x})^2} = \frac{100 \cdot 100{,}000}{100{,}000^2}\,\hat{\beta}_1^{orig} = \frac{\hat{\beta}_1^{orig}}{1000}.

12. Multiplying the dependent variable by 100 and the explanatory variable by 100,000 leaves the regression R^2 the same.
True. R^2_{new} = \frac{\sum_{i=1}^n (\hat{y}_i^{new} - \bar{y}^{new})^2}{\sum_{i=1}^n (y_i^{new} - \bar{y}^{new})^2} = \frac{\sum_{i=1}^n \left( \frac{\hat{\beta}_1^{orig}}{1000} \cdot 100{,}000\,(x_i - \bar{x}) \right)^2}{\sum_{i=1}^n (100\,y_i - 100\,\bar{y})^2} = \frac{100^2 \sum_{i=1}^n \left( \hat{\beta}_1^{orig}(x_i - \bar{x}) \right)^2}{100^2 \sum_{i=1}^n (y_i - \bar{y})^2} = R^2_{orig}.

13. In the presence of heteroskedasticity, and assuming that the usual least squares assumptions hold, the OLS estimator is unbiased and consistent, but not BLUE.
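The rescaling results in items 11 and 12 can also be verified numerically. A minimal sketch with made-up data (the data-generating process below is illustrative): multiplying y by 100 and x by 100,000 should divide the slope by 1,000 while leaving R^2 unchanged.

```python
import numpy as np

def simple_ols(x, y):
    """Return the simple-regression slope and R^2."""
    b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    b0 = y.mean() - b1 * x.mean()
    fitted = b0 + b1 * x
    r2 = np.sum((fitted - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)
    return b1, r2

# Illustrative data: y = 1 + 0.5x + noise
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 0.5 * x + rng.normal(size=100)

b1_orig, r2_orig = simple_ols(x, y)
b1_new, r2_new = simple_ols(100_000 * x, 100 * y)

print(np.isclose(b1_new, b1_orig / 1000))  # True: slope rescales (item 11)
print(np.isclose(r2_new, r2_orig))         # True: R^2 is unchanged (item 12)
```

Because R^2 is the squared sample correlation (item 9), and correlation is invariant to positive rescaling of either variable, the second result holds for any positive scale factors, not just these two.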
This note was uploaded on 09/23/2011 for the course ECON 103 taught by Professor Sandrablack during the Spring '07 term at UCLA.