# PS2solutions: Econ 103 UCLA, Fall 2010, Problem Set 2 Solutions

Econ 103 UCLA, Fall 2010
Problem Set 2 Solutions, by Joe Kuehn

Part 1: True or False, and explain briefly why.

1. To obtain the slope estimator using the least squares principle, we divide the sample covariance of X and Y by the sample variance of Y.

   **False.** Instead, to get the slope estimator we divide the sample covariance of X and Y by the sample variance of X: $\hat\beta_1 = s_{XY}/s_X^2$.

2. The OLS intercept coefficient $\hat\beta_0$ is equal to the average of the $Y_i$ in the sample.

   **False.** $\hat\beta_0 = \bar{Y} - \hat\beta_1 \bar{X}$.

3. Among all unbiased estimators that are weighted averages of $Y_1, \ldots, Y_n$, $\hat\beta_1$ is the most unbiased estimator of $\beta_1$.

   **False.** $\hat\beta_1$ is unbiased, so it is not more or less unbiased than any other unbiased estimator. The Gauss–Markov theorem says that it is the best (most efficient) among linear unbiased estimators.

4. When the estimated slope coefficient in the simple regression model, $\hat\beta_1$, is zero, then $R^2 = 0$.

   **True.** When $\hat\beta_1 = 0$, $X_i$ explains none of the variation in $Y_i$, so the explained sum of squares (ESS) is zero. Thus $R^2 = \mathrm{ESS}/\mathrm{TSS} = 0$.

5. The standard error of the regression is equal to $1 - R^2$.

   **False.** $\mathrm{SER} = \sqrt{\frac{1}{n-2}\sum_{i=1}^n \hat{u}_i^2}$, while $1 - R^2 = \frac{\sum_{i=1}^n \hat{u}_i^2}{\sum_{i=1}^n (Y_i - \bar{Y})^2}$, and the two are not equal.

6. Heteroskedasticity is when the variance of $u_i$ depends on the value of $X_i$.

   **True.** This is the definition of heteroskedasticity.

7. The output from the Stata command `regress y x` reports the p-value associated with the test of the null hypothesis that $\beta_1 = 0$.

   **True.** The p-value for the test of the null hypothesis that $\beta_1 = 0$ is reported in a Stata regression under the column `P>|t|`.

8. ESS = SSR + TSS.

   **False.** $\mathrm{ESS} = \mathrm{TSS} - \mathrm{SSR}$.
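The algebra behind items 1, 2, 4, 5, and 8 can be checked numerically. The sketch below is my own illustration (not part of the original answer key), using NumPy on synthetic data; all variable names are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# Item 1: slope = sample Cov(X, Y) / sample Var(X)  (variance of X, not Y)
beta1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Item 2: intercept = Ybar - beta1 * Xbar (not simply Ybar)
beta0 = y.mean() - beta1 * x.mean()

yhat = beta0 + beta1 * x
u = y - yhat                      # OLS residuals

tss = np.sum((y - y.mean()) ** 2)
ess = np.sum((yhat - y.mean()) ** 2)
ssr = np.sum(u ** 2)

# Item 8: the correct decomposition is TSS = ESS + SSR, so ESS = TSS - SSR
assert np.isclose(ess, tss - ssr)

r2 = ess / tss                          # R^2 = ESS / TSS (item 4)
ser = np.sqrt(ssr / (n - 2))            # standard error of the regression (item 5)
print(round(r2, 3), round(ser, 3))
```

Running this confirms the decomposition holds to floating-point precision, and that the SER is a residual standard deviation, not $1 - R^2$.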

9. In the simple regression we can compute $R^2$ as $\left(\frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}\right)^2$.

   **True.** As stated in class, $R^2$ equals the squared sample correlation coefficient between X and Y.

10. The sample average of the OLS residuals is zero.

   **True.**
   $$\frac{1}{n}\sum_{i=1}^n \hat{u}_i = \bar{Y} - \frac{1}{n}\sum_{i=1}^n \hat{Y}_i = \bar{Y} - \frac{1}{n}\sum_{i=1}^n \left(\bar{Y} - \hat\beta_1 \bar{X} + \hat\beta_1 X_i\right) = \bar{Y} - \bar{Y} + \hat\beta_1 \bar{X} - \hat\beta_1 \bar{X} = 0$$

11. Multiplying the dependent variable by 100 and the explanatory variable by 100,000 leaves the OLS estimate of the slope the same.

   **False.**
   $$\hat\beta_{\text{new}} = \frac{\sum_{i=1}^n (100{,}000\, x_i - 100{,}000\, \bar{x})(100\, y_i - 100\, \bar{y})}{\sum_{i=1}^n (100{,}000\, x_i - 100{,}000\, \bar{x})^2} = \frac{\hat\beta_{\text{orig}}}{1000}$$

12. Multiplying the dependent variable by 100 and the explanatory variable by 100,000 leaves the regression $R^2$ the same.

   **True.** Writing $\hat{y}_i$ for the original fitted values, the new fitted values are $100\,\hat{y}_i$, so the factor $100^2$ cancels between numerator and denominator:
   $$R^2_{\text{new}} = \frac{\sum_{i=1}^n (100\, \hat{y}_i - 100\, \bar{y})^2}{\sum_{i=1}^n (100\, y_i - 100\, \bar{y})^2} = \frac{100^2 \sum_{i=1}^n (\hat{y}_i - \bar{y})^2}{100^2 \sum_{i=1}^n (y_i - \bar{y})^2} = R^2_{\text{orig}}$$
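Items 9 through 12 can also be verified numerically. The following sketch is my own (not from the original solutions) and checks each claim with NumPy on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

def ols(x, y):
    """Simple-regression OLS: return the slope, fitted values, and R^2."""
    b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    b0 = y.mean() - b1 * x.mean()
    yhat = b0 + b1 * x
    r2 = np.sum((yhat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)
    return b1, yhat, r2

b1, yhat, r2 = ols(x, y)

# Item 9: R^2 equals the squared sample correlation between X and Y
assert np.isclose(r2, np.corrcoef(x, y)[0, 1] ** 2)

# Item 10: the OLS residuals average to zero
assert np.isclose((y - yhat).mean(), 0.0)

# Items 11-12: rescaling Y by 100 and X by 100,000 divides the slope
# by 1,000 but leaves R^2 unchanged
b1_new, _, r2_new = ols(100_000 * x, 100 * y)
assert np.isclose(b1_new, b1 / 1000)
assert np.isclose(r2_new, r2)
```

The scaling check makes the algebra of item 11 concrete: the covariance picks up a factor of $100{,}000 \times 100$ while the variance of X picks up $100{,}000^2$, leaving a net factor of $1/1000$ on the slope.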
13. In the presence of heteroskedasticity, and assuming that the usual least squares assumptions hold, the OLS estimator is unbiased and consistent, but not BLUE.

   **True.** Unbiasedness and consistency follow from the least squares assumptions alone and do not require homoskedasticity. Homoskedasticity is the extra condition in the Gauss–Markov theorem, so under heteroskedasticity OLS is no longer guaranteed to be the best linear unbiased estimator.
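A small Monte Carlo can illustrate the unbiasedness half of item 13. This sketch is my own (the error-variance form and all parameter values are assumptions of the example, not from the problem set); it draws errors whose variance depends on $x_i$, exactly the situation described in item 6, and shows the OLS slope estimates still center on the true value:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, true_b1 = 200, 2000, 1.5
slopes = []
for _ in range(reps):
    x = rng.normal(size=n)
    # Var(u_i) depends on x_i: heteroskedasticity as defined in item 6
    u = rng.normal(size=n) * np.sqrt(0.5 + x ** 2)
    y = 2.0 + true_b1 * x + u
    slopes.append(np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1))

# Unbiasedness survives: the average estimate sits close to the true slope
print(np.mean(slopes))
```

What heteroskedasticity does break is the classical standard-error formula (and hence BLUE-ness); in practice one would use heteroskedasticity-robust standard errors, e.g. Stata's `regress y x, robust`.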

