Econ 399 Chapter 4c - 4.3 Confidence Intervals

4.3 Confidence Intervals
- Using our CLM assumptions, we can construct CONFIDENCE INTERVALS or CONFIDENCE INTERVAL ESTIMATES of the form:

  CI = \hat{\beta}_j \pm t^* \, se(\hat{\beta}_j)

- Given a significance level \alpha (which is used to determine t^*), we construct 100(1 - \alpha)% confidence intervals
- Given random samples, 100(1 - \alpha)% of our confidence intervals contain the true value \beta_j
- we don't know whether an individual confidence interval contains the true value

4.3 Confidence Intervals
- Confidence intervals are similar to 2-tailed tests in that \alpha/2 is in each tail when finding t^*
- if our hypothesis test and confidence interval use the same \alpha:
  1) we cannot reject the null hypothesis (at the given significance level) that \beta_j = a_j if a_j is within the confidence interval
  2) we can reject the null hypothesis (at the given significance level) that \beta_j = a_j if a_j is not within the confidence interval
  (a numerical check of this equivalence appears in the second sketch below)

4.3 Confidence Example
- Going back to our Pepsi example, we now look at geekiness:

  \widehat{Cool} = 4.3 + 0.3\,Geek + 0.5\,Pepsi, \quad R^2 = 0.62, \quad N = 43
  (standard errors: 2.1, 0.25 and 0.21 respectively)

- From before, our 2-sided t^* with \alpha = 0.01 was t^* = 2.704, therefore our 99% CI is:

  CI = \hat{\beta}_j \pm t^* \, se(\hat{\beta}_j) = 0.3 \pm 2.704(0.25) = [-0.376, 0.976]

  (the first sketch below reproduces this calculation)

4.3 Confidence Intervals
- Remember that a CI is only as good as the 6 CLM assumptions:
  1) Omitted variables cause the estimates (the \hat{\beta}_j's) to be unreliable - the CI is not valid
  2) If heteroskedasticity is present, the standard error is not a valid estimate of the standard deviation - the CI is not valid
  3) If normality fails, the CI MAY not be valid if our sample size is too small

4.4 Complicated Single Tests
- In this section we will see how to test a single hypothesis involving more than one \beta_j
- Take again our coolness regression:

  \widehat{Cool} = 4.3 + 0.3\,Geek + 0.5\,Pepsi, \quad R^2 = 0.62, \quad N = 43

- If we wonder whether geekiness has more impact on coolness than Pepsi consumption:

  H_0: \beta_1 = \beta_2
  H_a: \beta_1 > \beta_2

4.4 Complicated Single Tests
- This test is similar to our one-coefficient tests, but our standard error will be different
- We can rewrite our hypotheses for clarity:

  H_0: \beta_1 - \beta_2 = 0
  H_a: \beta_1 - \beta_2 > 0

- We can reject the null hypothesis if the estimated difference between \hat{\beta}_1 and \hat{\beta}_2 is positive enough

4.4 Complicated Single Tests
- Our new t statistic becomes:

  t = (\hat{\beta}_1 - \hat{\beta}_2) / se(\hat{\beta}_1 - \hat{\beta}_2)

- And our test continues as before:
  1) Calculate t
  2) Pick \alpha and calculate t^*
  3) Reject H_0 if t > t^*

4.4 Complicated Standard Errors
- The standard error in this test is more complicated than before
- If we simply subtract standard errors, we may end up with a negative value - this is theoretically impossible
- se must always be positive since it estimates a standard deviation
  (the third sketch below illustrates the usual variance rule for the difference)

4.4 Complicated Standard Errors
...
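The 99% CI computed in section 4.3 above can be reproduced numerically. The following is a minimal sketch (not part of the original slides) using SciPy; it assumes df = N - 3 = 40 (43 observations, two slopes plus an intercept), which is consistent with the slides' t* = 2.704.

# Sketch: 100(1 - alpha)% confidence interval for the Geek coefficient,
# using the numbers reported in the slides (beta_hat = 0.3, se = 0.25).
from scipy import stats

beta_hat = 0.3           # estimated coefficient on Geek
se = 0.25                # its standard error
df = 43 - 3              # N minus the number of estimated parameters (assumed: 2 slopes + intercept)
alpha = 0.01             # significance level for a 99% interval

t_star = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical value, about 2.704
lower = beta_hat - t_star * se
upper = beta_hat + t_star * se
print(f"t* = {t_star:.3f}, 99% CI = [{lower:.3f}, {upper:.3f}]")   # roughly [-0.376, 0.976]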
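The equivalence between the confidence interval and the two-tailed test can be checked with the same numbers. This is again an illustrative sketch; the hypothesized value a_j = 0 is chosen for the example and is not taken from the preview.

# Sketch: duality between the 99% CI and the two-tailed test of H0: beta_Geek = a_j.
from scipy import stats

beta_hat, se, df, alpha = 0.3, 0.25, 40, 0.01
a_j = 0.0                                   # hypothesized value (illustrative choice, not from the slides)

t_stat = (beta_hat - a_j) / se              # 1.2
t_star = stats.t.ppf(1 - alpha / 2, df)     # about 2.704
reject = abs(t_stat) > t_star               # False: cannot reject H0 at the 1% level
inside = (beta_hat - t_star * se) <= a_j <= (beta_hat + t_star * se)   # True: a_j lies in the CI
print(reject, inside)                       # the two conclusions always agree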

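The preview ends before deriving se(\hat{\beta}_1 - \hat{\beta}_2). By the usual variance rule, Var(\hat{\beta}_1 - \hat{\beta}_2) = Var(\hat{\beta}_1) + Var(\hat{\beta}_2) - 2 Cov(\hat{\beta}_1, \hat{\beta}_2), which is why simply subtracting standard errors is wrong. The sketch below assumes this rule; the covariance value is hypothetical, since it is not reported in the preview.

# Sketch: t statistic for H0: beta_1 = beta_2 (Geek vs. Pepsi).
# The coefficients and standard errors come from the slides' regression;
# cov12 is a HYPOTHETICAL covariance of the two estimates, for illustration only.
import math

b1, se1 = 0.3, 0.25      # Geek coefficient and its standard error
b2, se2 = 0.5, 0.21      # Pepsi coefficient and its standard error
cov12 = -0.01            # hypothetical Cov(beta1_hat, beta2_hat)

se_diff = math.sqrt(se1**2 + se2**2 - 2 * cov12)   # always positive
t_stat = (b1 - b2) / se_diff
print(f"t = {t_stat:.3f}")                          # compare with the one-sided t* at df = 40

In practice the covariance estimate (or the entire test) would come from regression software rather than being typed in by hand.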
