4.3 Confidence Intervals

Using our CLM assumptions, we can construct CONFIDENCE INTERVALS or CONFIDENCE INTERVAL ESTIMATES of the form:

    CI = β̂_j ± t*·se(β̂_j)

Given a significance level α (which is used to determine t*), we construct 100(1 − α)% confidence intervals. Given random samples, 100(1 − α)% of our confidence intervals contain the true value β_j, but we don't know whether an individual confidence interval contains the true value.

4.3 Confidence Intervals

Confidence intervals are similar to 2-tailed tests in that α/2 is in each tail when finding t*. If our hypothesis test and confidence interval use the same α:
1) we cannot reject the null hypothesis (at the given significance level) that β_j = a_j if a_j is within the confidence interval
2) we can reject the null hypothesis (at the given significance level) that β_j = a_j if a_j is not within the confidence interval

4.3 Confidence Example

Going back to our Pepsi example, we now look at geekiness:

    Ĉool = 4.3 + .3 Geek + .5 Pepsi      R² = .62, N = 43
          (2.1)  (.25)      (.21)

From before, our 2-sided t* with α = 0.01 was t* = 2.704; therefore our 99% CI is:

    CI = β̂_j ± t*·se(β̂_j) = .3 ± 2.704(.25)  →  CI = [−.376, .976]

4.3 Confidence Intervals

Remember that a CI is only as good as the 6 CLM assumptions:
1) Omitted variables cause the estimates (the β̂_j's) to be unreliable → the CI is not valid
2) If heteroskedasticity is present, the standard error is not a valid estimate of the standard deviation → the CI is not valid
3) If normality fails, the CI MAY not be valid if our sample size is too small

4.4 Complicated Single Tests

In this section we will see how to test a single hypothesis involving more than one β_j. Take again our coolness regression:

    Ĉool = 4.3 + .3 Geek + .5 Pepsi      R² = .62, N = 43
          (2.1)  (.25)      (.21)

If we wonder whether geekiness has more impact on coolness than Pepsi consumption:

    H₀: β₁ = β₂
    Hₐ: β₁ > β₂

4.4 Complicated Single Tests

This test is similar to our one-coefficient tests, but our standard error will be different. We can rewrite our hypotheses for clarity:

    H₀: β₁ − β₂ = 0
    Hₐ: β₁ − β₂ > 0

We can reject the null hypothesis if the estimated difference between β̂₁ and β̂₂ is positive enough.

4.4 Complicated Single Tests

Our new t statistic becomes:

    t = (β̂₁ − β̂₂) / se(β̂₁ − β̂₂)

And our test continues as before:
1) Calculate t
2) Pick α and calculate t*
3) Reject if t > t*

4.4 Complicated Standard Errors

The standard error in this test is more complicated than before. If we simply subtract standard errors, we may end up with a negative value, which is theoretically impossible: se must always be positive since it estimates a standard deviation.

4.4 Complicated Standard Errors

4....
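The 99% confidence interval from the geekiness example can be checked with a few lines of Python. This is a minimal sketch using only the numbers given above (coefficient .3, standard error .25, critical value t* = 2.704):

```python
# Values from the lecture example
beta_hat = 0.3   # estimated Geek coefficient
se = 0.25        # its standard error
t_star = 2.704   # two-sided critical value for alpha = .01 (df = 43 - 3 = 40)

# CI = beta_hat +/- t* * se(beta_hat)
half_width = t_star * se
ci = (beta_hat - half_width, beta_hat + half_width)
print([round(x, 3) for x in ci])  # -> [-0.376, 0.976], matching the slide
```

Since 0 lies inside this interval, the test-inversion rule above says we cannot reject H₀: β_Geek = 0 at the 1% level.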
This note was uploaded on 03/14/2009 for the course ECON ECON 399 taught by Professor Priemaza during the Spring '09 term at University of Alberta.