Ru² = fit of the unrestricted model; Rr² = fit of the restricted model.

    F = [(Ru² - Rr²)/J] / [(1 - Ru²)/(n - K)]  ~  F[J, n - K]

(2) Is Rb - q close to 0? Base the test on the discrepancy vector m = Rb - q. By the Wald criterion, m′(Var[m])⁻¹m has a chi-squared distribution with J degrees of freedom. But Var[m] = R[σ²(X′X)⁻¹]R′. If we use our estimate of σ² (namely e′e/(n - K)), we get an F[J, n - K] statistic instead. The two tests are the same for the linear model.

Part 8: Hypothesis Testing

Testing Fundamentals I (16/50)

- SIZE of a test = the probability that it will incorrectly reject a "true" null hypothesis.
- This is the probability of a Type I error.
- Under the null hypothesis, F(3,100) has an F distribution with (3,100) degrees of freedom. Even if the null is true, F will be larger than the 5% critical value of 2.7 about 5% of the time.

Testing Procedures (17/50)

How do we determine whether the statistic is "large"? We need a null distribution. If the hypothesis is true, the statistic has a known distribution; this tells us how likely particular values are, and in particular that "large" values will be unlikely. If the observed statistic is too large, we conclude that the assumed distribution must be incorrect and the hypothesis should be rejected. For the linear regression model with normally distributed disturbances, the relevant statistic is distributed F with J and n - K degrees of freedom.
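The equivalence claimed above can be checked numerically. Below is a minimal sketch in Python with NumPy (not from the slides; the variable names such as F_r2 and F_wald are my own), using simulated data with the null true. It computes the F statistic once from the two R²'s and once from the Wald form with σ² estimated by e′e/(n - K); for the linear model the two coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, J = 100, 4, 3          # n observations, K coefficients, J restrictions (b2=b3=b4=0)

X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = rng.normal(size=n)       # null is true: y is unrelated to the regressors

b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
Ru2 = 1 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))
Rr2 = 0.0                    # restricted model is intercept only, so R² = 0

# F built from the fits of the two models
F_r2 = ((Ru2 - Rr2) / J) / ((1 - Ru2) / (n - K))

# Wald form: m = Rb - q, with R selecting b2, b3, b4 and q = 0
R = np.hstack([np.zeros((J, 1)), np.eye(J)])
q = np.zeros(J)
s2 = (e @ e) / (n - K)                      # estimate of sigma²
m = R @ b - q
Vm = R @ (s2 * np.linalg.inv(X.T @ X)) @ R.T
F_wald = (m @ np.linalg.solve(Vm, m)) / J   # chi-squared criterion / J -> F[J, n-K]

print(F_r2, F_wald)   # identical for the linear model
```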
Distribution Under the Null (18/50)

[Figure: density of the F[3,100] distribution.]

A Simulation Experiment (19/50)

    sample    ; 1 - 100 $
    matrix    ; fvalues = init(1000,1,0) $
    proc $
    create    ; fakelogc = rnn(.319557, 1.54236) $      ? coefficients all = 0
    regress   ; quietly ; lhs = fakelogc
              ; rhs = one,logq,logpl_pf,logpk_pf $      ? compute regression
    calc      ; fstat = (rsqrd/3)/((1-rsqrd)/(n-4)) $   ? compute F
    matrix    ; fvalues(i) = fstat $                    ? save 1000 Fs
    endproc
    execute   ; i = 1,1000 $                            ? 1000 replications
    histogram ; rhs = fvalues ; title = F Statistic for H0:b2=b3=b4=0 $

Simulation Results (20/50)

48 outcomes to the right of 2.7 in this run of the experiment.

Testing Fundamentals II (21/50)

- POWER of a test = the probability that it will correctly reject a "false" null hypothesis.
- This is 1 - the probability of a Type II error.
- The power of a test depends on the specific alternative.

Power of a Test (22/50)

Null: Mean = 0. Reject if the observed mean is < -1.96 or > +1.96. ...
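The simulation experiment on slide 19/50 can be replicated in Python with NumPy. This is a sketch under assumptions: the slide's regressors (logq, logpl_pf, logpk_pf) come from a cost data set not included here, so fixed random draws stand in for them; the fake dependent variable uses the same mean and standard deviation as the slide's rnn(.319557, 1.54236) call. Since the null is true by construction, roughly 5% of the 1,000 F statistics should exceed 2.7.

```python
import numpy as np

rng = np.random.default_rng(42)
n, K, J = 100, 4, 3
reps = 1000

# Stand-ins for logq, logpl_pf, logpk_pf (the real cost data is not available here);
# the regressors are held fixed across replications.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])

fvalues = np.empty(reps)
for i in range(reps):
    # Fake dependent variable, unrelated to X, so H0: b2 = b3 = b4 = 0 is true
    y = rng.normal(0.319557, 1.54236, size=n)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    r2 = 1 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))
    fvalues[i] = (r2 / J) / ((1 - r2) / (n - K))

# Size check: the fraction of F's beyond the 5% critical value 2.7
rejections = np.mean(fvalues > 2.7)
print(rejections)   # roughly 0.05; the slide's run gave 48/1000
```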
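The mean-test example on the last slide can be made concrete with the standard normal power calculation. This is a sketch, not from the slides: norm_cdf and power are helper names of my own, and the formula assumes a known standard deviation so the standardized sample mean is exactly N(μ√n/σ, 1) under the alternative.

```python
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power(mu, n=1, sigma=1.0, crit=1.96):
    # Test of H0: mean = 0, rejecting when the standardized sample mean
    # is below -crit or above +crit. Under the alternative, the statistic
    # is N(mu*sqrt(n)/sigma, 1), so the rejection probability is:
    shift = mu * sqrt(n) / sigma
    return norm_cdf(-crit - shift) + (1.0 - norm_cdf(crit - shift))

print(power(0.0))        # at the null, this is just the size: about 0.05
print(power(0.5, n=25))  # power rises with distance from the null and with n
```

Evaluating power at mu = 0 recovers the size of the test, which is the sense in which power "depends on the specific alternative": the same rejection rule has a different rejection probability at every alternative mean.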
Fall '10
H. Bierens
Econometrics, Regression Analysis, Hypothesis Testing, Statistical Hypothesis Testing
