Week 5 Tutorial Exercises

Review Questions (these may or may not be discussed in tutorial classes)

What are the CLM assumptions?
These are MLR1, MLR2, MLR3 and MLR6. MLR6 is a very strong assumption that implies both MLR4 and MLR5.

What is the sampling distribution of the OLS estimators under the CLM assumptions?
Under the CLM assumptions, the OLS estimators follow the normal distribution.

What are the standard errors of the OLS estimators?
The standard error is the square root of the estimated variance of the estimator.

What is the null hypothesis about a parameter?
In this context, the null hypothesis is a statement that assigns a known (hypothesised) value to the parameter of interest. Usually, the null is a maintained proposition (from economic theory or experience) that will be rejected only on very strong evidence.

What is a one-tailed (two-tailed) alternative hypothesis?
This should be clear once you finish your reading.

In testing hypotheses, what is a Type 1 (Type 2) error? What is the level of significance?
A Type 1 error is the rejection of a true null. A Type 2 error is the non-rejection of a false null. The level of significance is the probability of a Type 1 error.

The decision rule we use can be stated as "reject the null if the t-statistic exceeds the critical value". How is the critical value determined?
The critical value is determined by the level of significance (or the level of confidence in the case of confidence intervals) and the distribution of the test statistic. For example, for the t-statistic, we use the t distribution when the sample size is small, and the normal distribution when the sample size is large (df > 120).

Justify the statement "Given the observed test statistic, the p-value is the smallest significance level at which the null hypothesis would be rejected."
If you used the observed t-statistic as your critical value, you would just reject the null (because the t-statistic is equal to this value). This "critical value" (= the observed t-statistic) implies a level of significance, say p, which is exactly the p-value. For any significance level smaller than p, the corresponding critical value is more extreme than the observed t-statistic and does not lead to rejection of the null. Hence, p is the smallest significance level at which the null would be rejected.

What is the 90% confidence interval for a parameter?
It is an interval [L, U] defined by two sample statistics: the lower bound L and the upper bound U. The interval is constructed so that the probability that [L, U] covers the parameter is 0.9.

In constructing a confidence interval for a parameter, what is the level of confidence?
It is the probability that the CI covers the parameter.

When the level of confidence increases, how would the width of the confidence interval change (holding other things fixed)?
The width will increase.

Try to convince yourself that the event "the 90% confidence interval covers a hypothesised value of the parameter" is the same as the event "the null of the parameter being the hypothesised value cannot be rejected in favour of the two-tailed alternative at the 10% level of significance."
It becomes apparent if you start with P(|t-stat| < c) = 0.9 and disentangle the expression for the t-statistic.

Problem Set (these will be discussed in tutorial classes)

Q1. Wooldridge 4.1
(i) and (iii) generally cause the t statistics not to have a t distribution under H0. Homoskedasticity is one of the CLM assumptions. An important omitted variable violates Assumption MLR.4 (hence MLR.6) in general. The CLM assumptions contain no mention of the sample correlations among independent variables, except to rule out the case where the correlation is one (MLR.3).

Q2. Wooldridge 4.2
(i) H0: β3 = 0. H1: β3 > 0.
(ii) The proportionate effect on salary is .00024(50) = .012. To obtain the percentage effect, we multiply this by 100: 1.2%. Therefore, a 50 point ceteris paribus increase in ros is predicted to increase salary by only 1.2%.
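The percentage-effect arithmetic in part (ii) can be checked directly. This is a minimal sketch: the coefficient (.00024) and the 50-point change in ros come from the problem, and the function name is illustrative.

```python
def pct_effect(beta, delta_x):
    """Approximate percentage effect in a log(y) model:
    %change in y is roughly 100 * beta * delta_x (valid for small changes)."""
    return 100 * beta * delta_x

# Q2 (ii): a 50-point increase in ros with beta_ros = .00024
print(round(pct_effect(0.00024, 50), 4))  # 1.2 (percent)
```

The 100 factor converts the proportionate change (.012) into the percentage change (1.2%), which is the step spelled out in the answer above.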
(iii) Practically speaking, this is a very small effect for such a large change in ros. The 10% critical value for a one-tailed test, using df = ∞, is obtained from Table G.2 as 1.282. The t statistic on ros is .00024/.00054 ≈ .44, which is well below the critical value. Therefore, we fail to reject H0 at the 10% significance level.
(iv) Based on this sample, the estimated ros coefficient appears to be different from zero only because of sampling variation. On the other hand, including ros may not be causing any harm; it depends on how correlated it is with the other independent variables (although these are very significant even with ros in the equation).

Q3. Wooldridge 4.5
(i) .412 ± 1.96(.094), or about [.228, .596].
(ii) No, because the value .4 is well inside the 95% CI.
(iii) Yes, because 1 is well outside the 95% CI.

Q4. Wooldridge C4.8 (401ksubs_ch04.do)
(i) There are 2,017 single people in the sample of 9,275.
(ii) The estimated equation is

    nettfa_hat = −43.04 + .799 inc + .843 age
                 (4.08)   (.060)     (.092)
    n = 2,017, R2 = .119.

The coefficient on inc indicates that one more dollar in income (holding age fixed) is reflected in about 80 more cents in predicted nettfa; no surprise there. The coefficient on age means that, holding income fixed, if a person gets another year older, his/her nettfa is predicted to increase by about $843. (Remember, nettfa is in thousands of dollars.) Again, this is not surprising.
(iii) The intercept is not very interesting, as it gives the predicted nettfa for inc = 0 and age = 0. Clearly, no one in the relevant population has values even close to these.
(iv) The t statistic is (.843 − 1)/.092 ≈ −1.71. Against the one-sided alternative H1: β2 < 1, the p-value is about .044. Therefore, we can reject H0: β2 = 1 at the 5% significance level (against the one-sided alternative).
(v) The slope coefficient on inc in the simple regression is about .821, which is not very different from the .799 obtained in part (ii). As it turns out, the correlation between inc and age in the sample of single people is only about .039, which helps explain why the simple and multiple regression estimates are not very different; see also the discussion in the text.
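The test statistics, confidence interval, and p-value quoted in Q2 (iii), Q3 (i) and Q4 (iv) can be reproduced with a short stdlib-only sketch. The normal approximation is appropriate here since the degrees of freedom are large; all estimates and standard errors are taken from the solutions above.

```python
from statistics import NormalDist

z = NormalDist()  # standard normal; close to the t distribution when df > 120

# Q2 (iii): one-tailed test of H0: beta_ros = 0 vs H1: beta_ros > 0
t_ros = 0.00024 / 0.00054        # t statistic = estimate / standard error
print(round(t_ros, 2))           # 0.44, well below the 10% critical value 1.282

# Q3 (i): 95% CI for the estimate .412 with standard error .094
c = z.inv_cdf(0.975)             # two-tailed 5% critical value, about 1.96
lo, hi = 0.412 - c * 0.094, 0.412 + c * 0.094   # about [.228, .596]

# Q4 (iv): one-sided test of H0: beta_age = 1 vs H1: beta_age < 1
t_age = (0.843 - 1) / 0.092      # about -1.71
p_one_sided = z.cdf(t_age)       # lower-tail p-value, about .044
```

Note how the Q3 computation also illustrates the CI/test duality from the review questions: .4 lies inside [lo, hi], so H0: β = .4 is not rejected at the 5% level, while 1 lies outside, so H0: β = 1 is rejected.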