The restricted one used logG - logY. The calculation uses the sums of squared residuals.
32/50
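A minimal sketch of this sum-of-squared-residuals calculation in code (Python with scipy); every number below is a made-up placeholder rather than a value from these slides:

# F test built from the restricted and unrestricted sums of squared residuals.
# All inputs here are hypothetical placeholders.
from scipy import stats

ssr_unrestricted = 0.0086   # SSR from the full (unrestricted) regression
ssr_restricted = 0.0120     # SSR from the regression with the restrictions imposed
J, n, K = 2, 50, 6          # number of restrictions, observations, full-model parameters

F = ((ssr_restricted - ssr_unrestricted) / J) / (ssr_unrestricted / (n - K))
p_value = stats.f.sf(F, J, n - K)   # upper-tail F(J, n-K) probability
print(F, p_value)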
Part 8: Hypothesis Testing
Wald Distance Measure
Testing more generally about a single parameter.
Sample estimate is b_k; hypothesized value is β_k.
How far is b_k from β_k? If too far, the hypothesis is inconsistent with the sample evidence.
Measure the distance in standard error units: t = (b_k - β_k) / estimated standard error of b_k.
If |t| is "large" (larger than the critical value), reject the hypothesis.
33/50
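As a small numerical illustration (not from the original slides), the distance measure and the critical value can be computed directly; the estimate, hypothesized value, standard error, and degrees of freedom below are hypothetical:

# Distance measure in standard error units: t = (b_k - beta_k) / se(b_k).
# All numbers are hypothetical.
from scipy import stats

b_k = 0.116      # sample estimate of the coefficient
beta_k0 = 0.0    # hypothesized value
se_k = 0.079     # estimated standard error of b_k
df = 27          # degrees of freedom (n - K)

t = (b_k - beta_k0) / se_k
critical = stats.t.ppf(0.975, df)       # 5% two-tailed critical value
print(t, critical, abs(t) > critical)   # reject H0 only if |t| exceeds the critical value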
Part 8: Hypothesis Testing
The Wald Statistic
Most test statistics are Wald distance measures:
W = (random vector - hypothesized value)′ [Variance of the difference]⁻¹ (random vector - hypothesized value)
  = (q̂ - q₀)′ [Var(q̂ - q₀)]⁻¹ (q̂ - q₀)
  = normalized distance measure.
Distributed as chi-squared(J) if (1) the distance is normally distributed and (2) the variance matrix is the true one, not the estimate.
34/50
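A sketch of this quadratic form in code for J linear restrictions Rβ = q; the coefficient vector, covariance matrix, and restrictions below are invented for illustration and are not the model estimated later in these slides:

# Wald distance measure W = (Rb - q)' [R V R']^{-1} (Rb - q) for J restrictions.
# b, V, R, and q are hypothetical illustrations.
import numpy as np
from scipy import stats

b = np.array([1.0, 0.5, -0.3])            # estimated coefficient vector
V = np.diag([0.04, 0.01, 0.02])           # estimated Var(b)
R = np.array([[0.0, 1.0, 0.0],            # restriction 1: beta_2 = 0.4
              [0.0, 0.0, 1.0]])           # restriction 2: beta_3 = 0.0
q = np.array([0.4, 0.0])
J = R.shape[0]

m = R @ b - q                             # distance between estimate and hypothesis
W = m @ np.linalg.solve(R @ V @ R.T, m)   # distance normalized by its variance
# The chi-squared(J) distribution is exact only if m is normal and V is the true
# variance; with an estimated V it holds asymptotically.
p_value = stats.chi2.sf(W, J)
print(W, p_value)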
Part 8: Hypothesis Testing
Test Statistics
Forming test statistics: for distance measures, use a Wald-type distance measure,
W = m′ [Est.Var(m)]⁻¹ m.
An important relationship between t and F:
For a single restriction, m = r′b - q. The variance is r′ (Var[b]) r.
The distance measure is t = m / (estimated standard error of m), and W = t².
35/50
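A numerical sketch of the single-restriction case and the link between t and F; the coefficients, covariance matrix, and the restriction r′b = q below are hypothetical:

# Single restriction: m = r'b - q, Var(m) = r' Var(b) r, t = m / se(m), and W = t^2.
# All inputs are hypothetical.
import numpy as np
from scipy import stats

b = np.array([2.0, 0.8, -0.5])    # estimated coefficients
V = np.diag([0.09, 0.04, 0.01])   # estimated Var(b)
r = np.array([0.0, 1.0, 1.0])     # restriction r'b = q, here beta_2 + beta_3 = 0.5
q = 0.5
df = 27                           # n - K

m = r @ b - q
se_m = np.sqrt(r @ V @ r)
t = m / se_m
W = t**2                                # equals the F statistic for a single restriction
print(t, W, stats.f.ppf(0.95, 1, df))   # compare W with the F(1, df) critical value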
Part 8: Hypothesis Testing
Application
Time series regression:
logG = β1 + β2 logY + β3 logPG + β4 logPNC + β5 logPUC + β6 logPPT + β7 logPN + β8 logPD + β9 logPS + ε
Period = 1960-1995.
Note that all coefficients in the model are elasticities.
36/50
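A sketch of how this equation might be estimated with statsmodels; the file name is a hypothetical placeholder, and the column names are the ones that appear in the output on the next slide:

# Log-log gasoline demand regression, LG on LY and the logged price indexes.
# "gasoline.csv" is a hypothetical file assumed to hold the already-logged
# 1960-1995 series under the names LG, LY, LPG, LPNC, LPUC, LPPT, LPN, LPD, LPS.
import pandas as pd
import statsmodels.formula.api as smf

gas = pd.read_csv("gasoline.csv")
results = smf.ols("LG ~ LY + LPG + LPNC + LPUC + LPPT + LPN + LPD + LPS", data=gas).fit()
print(results.summary())   # in a log-log specification the slope coefficients are elasticities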
Part 8: Hypothesis Testing
Full Model
----------------------------------------------------------------------
Ordinary least squares regression
LHS=LG      Mean                 =    5.39299
            Standard deviation   =     .24878
            Number of observs.   =         36
Model size  Parameters           =          9
            Degrees of freedom   =         27
Residuals   Sum of squares       =     .00855  <*******
            Standard error of e  =     .01780  <*******
Fit         R-squared            =     .99605  <*******
            Adjusted R-squared   =     .99488  <*******
--------+-------------------------------------------------------------
Variable| Coefficient    Standard Error   t-ratio   P[|T|>t]  Mean of X
--------+-------------------------------------------------------------
Constant|   -6.95326***       1.29811      -5.356     .0000
      LY|    1.35721***        .14562       9.320     .0000    9.11093
     LPG|    -.50579***        .06200      -8.158     .0000     .67409
    LPNC|    -.01654           .19957       -.083     .9346     .44320
    LPUC|    -.12354*          .06568      -1.881     .0708     .66361
    LPPT|     .11571           .07859       1.472     .1525     .77208
     LPN|    1.10125***        .26840       4.103     .0003     .60539
     LPD|     .92018***        .27018       3.406     .0021     .43343
     LPS|   -1.09213***        .30812      -3.544     .0015     .68105
--------+-------------------------------------------------------------
37/50
Part 8: Hypothesis Testing
Test About One Parameter
Is the price of public transportation really relevant? H0: β6 = 0.
Confidence interval: b6 ± t(.95, 27) × standard error
  = .11571 ± 2.052(.07859)
  = .11571 ± .16127
  = (-.04556, .27698).
The interval contains 0.0, so do not reject the hypothesis.
Distance measure: (b6 - 0) / s_b6 = (.11571 - 0) / .07859 = 1.472 < 2.052.
Regression fit if we drop LPPT? Without LPPT, R-squared = .99573.
Compare R-squared: it was .99605.
F(1, 27) = [(.99605 - .99573)/1] / [(1 - .99605)/(36 - 9)] = 2.187 ≈ 1.472² (the small difference is due to rounding).
38/50
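The arithmetic above can be reproduced directly from the reported estimates; this short check uses only numbers that appear on these slides:

# Check of the test for H0: beta_6 = 0 (coefficient on LPPT) using the reported estimates.
from scipy import stats

b6, se6, df = 0.11571, 0.07859, 27
t_crit = stats.t.ppf(0.975, df)                 # about 2.052
ci = (b6 - t_crit * se6, b6 + t_crit * se6)     # 95% confidence interval, contains 0
t_stat = b6 / se6                               # about 1.472

r2_full, r2_drop, n, K = 0.99605, 0.99573, 36, 9
F = ((r2_full - r2_drop) / 1) / ((1 - r2_full) / (n - K))   # about 2.187, close to t_stat**2
print(ci, t_stat, F)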
Part 8: Hypothesis Testing
Robust Tests
- The Wald test generally will (when properly constructed) be more robust to failures of the narrow model assumptions than the t or F.
- Reason: it is based on "robust" variance estimators and asymptotic results that hold in a wide range of circumstances.
- Analysis: later in the course, after developing asymptotics.
39/50
Part 8: Hypothesis Testing
Particular Cases
Some particular cases:
One coefficient equals a particular value:
F = [(b - value) / standard error of b]² = square of the familiar t ratio.
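Spelling this case out from the Wald form on slide 34: with one restriction that picks out coefficient k (so r is the k-th unit vector, q is the hypothesized value, and J = 1), the quadratic form collapses to
W = (b_k - value)² / Est.Var(b_k) = [(b_k - value) / se(b_k)]² = t²,
which is exactly the F(1, n - K) statistic above.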