102c_lecture8 - HYPOTHESIS TESTING 1: Linear Hypotheses
HYPOTHESIS TESTING

1. Linear Hypotheses

Suppose you want to test a set of linear hypotheses on the vector of parameters \beta from the usual linear model y = X\beta + u, under the assumption that OLS is unbiased and consistent. In matrix algebra the hypotheses are written

    H_0: R\beta = \theta

The dimension of R is (q x k) and the dimension of \theta is (q x 1), where q is the number of restrictions that we want to test.

For example, suppose we want to test the hypothesis H_0: \beta_1 = 0. Then

    R = ( 1  0  ...  0 ),    \theta = 0

Consider for example testing H_0: \beta_1 = \beta_2. We still have just one restriction, so

    R = ( 1  -1  0  ...  0 ),    \theta = 0

Consider for example testing H_0: \beta_1 = \beta_2, \beta_3 = 2, \beta_4 + \beta_5 = 1, \beta_6 = 1 (assume k = 6). We now have four restrictions, so

    R = [ 1  -1   0   0   0   0        \theta = ( 0
          0   0   1   0   0   0                   2
          0   0   0   1   1   0                   1
          0   0   0   0   0   1 ],                1 )

A classical test is that all the coefficients of a linear model are zero but the constant, i.e. H_0: \beta_2 = \beta_3 = ... = \beta_k = 0. In this case

    R = [ 0_{(k-1) x 1}   I_{(k-1) x (k-1)} ],    \theta = 0_{(k-1) x 1}

Since we want to test the null that H_0: R\beta = \theta, the test statistic will be based (using the analogy criterion) on the sample analog R\hat\beta - \theta. The idea is that if R\hat\beta - \theta is statistically close to 0, then the null is not rejected. We just have to define what it means to be "statistically close to zero".

The distribution of the test statistic will be computed under the null hypothesis that R\beta = \theta. Then:

    E( R\hat\beta - \theta ) = R E( \hat\beta ) - \theta = R\beta - \theta = 0

    var( R\hat\beta - \theta ) = var( R\hat\beta ) = R var( \hat\beta ) R' = \sigma^2 R (X'X)^{-1} R'
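As an illustrative sketch (not part of the lecture), the four-restriction example above can be written out in NumPy; the names R, theta, and beta_null are assumptions chosen for illustration:

```python
import numpy as np

# Restriction matrix for H0: beta1 = beta2, beta3 = 2, beta4 + beta5 = 1,
# beta6 = 1, with k = 6 regressors, so R is (4 x 6) and theta is (4 x 1).
R = np.array([
    [1, -1, 0, 0, 0, 0],   # beta1 - beta2 = 0
    [0,  0, 1, 0, 0, 0],   # beta3       = 2
    [0,  0, 0, 1, 1, 0],   # beta4+beta5 = 1
    [0,  0, 0, 0, 0, 1],   # beta6       = 1
])
theta = np.array([0.0, 2.0, 1.0, 1.0])

# A (hypothetical) beta satisfying H0 gives R @ beta - theta = 0 exactly.
beta_null = np.array([0.5, 0.5, 2.0, 0.3, 0.7, 1.0])
print(R @ beta_null - theta)   # -> [0. 0. 0. 0.]
```

The sample analog R @ beta_hat - theta computed from OLS estimates would instead be close to, but not exactly, zero when the null is true.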
As for the distribution that R\hat\beta - \theta follows, assume u ~ N(0, \sigma^2 I). Since

    \hat\beta = (X'X)^{-1} X'y = (X'X)^{-1} X'(X\beta + u) = \beta + (X'X)^{-1} X'u

is a linear transformation of u, a normal r.v., then \hat\beta is also a normal r.v. Since R\hat\beta - \theta is a linear transformation of \hat\beta, a normal r.v., it will also be a normal r.v. Hence

    ( R\hat\beta - \theta ) ~ N( 0, \sigma^2 R (X'X)^{-1} R' )

At this point, let's recall that if X ~ N(0, \Sigma), then X' \Sigma^{-1} X ~ \chi^2_{rank(\Sigma)}. Applying this to our case:

    ( R\hat\beta - \theta )' [ R (X'X)^{-1} R' ]^{-1} ( R\hat\beta - \theta ) / \sigma^2  ~  \chi^2_q

where q is the number of restrictions. However, this statistic is not usable because \sigma^2 is unknown. We need to find a way to get rid of it. Recall then that

    \hat{u}' \hat{u} / \sigma^2  ~  \chi^2_{n-k}

and that the ratio of two independent \chi^2 r.v.s, each divided by its own degrees of freedom p and r, follows an F_{p,r} distribution. Applying this result to our case gives

    [ ( R\hat\beta - \theta )' [ R (X'X)^{-1} R' ]^{-1} ( R\hat\beta - \theta ) / (q \sigma^2) ]
    -----------------------------------------------------------------------------------------  ~  F_{q,n-k}
                       \hat{u}' \hat{u} / ( (n-k) \sigma^2 )

and simplifying (the unknown \sigma^2 cancels from numerator and denominator),

    F = { ( R\hat\beta - \theta )' [ R (X'X)^{-1} R' ]^{-1} ( R\hat\beta - \theta ) / q } / { \hat{u}' \hat{u} / (n-k) }  ~  F_{q,n-k}

High values of this statistic are evidence against the null hypothesis.
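As a hedged numerical sketch (not from the lecture), the F statistic above can be computed directly on simulated data; the simulated design, the seed, and the variable names are all assumptions for illustration:

```python
import numpy as np

# Simulate y = X beta + u with both slope coefficients equal to zero,
# so the "classical" null H0: beta2 = beta3 = 0 is true by construction.
rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

# OLS: b = (X'X)^{-1} X'y, residuals u_hat = y - X b
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
resid = y - X @ b

# Restrictions for H0: beta2 = beta3 = 0 (all slopes zero, constant free)
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
theta = np.zeros(2)
q = R.shape[0]

# F = [ (Rb - theta)' [R (X'X)^{-1} R']^{-1} (Rb - theta) / q ] / [ u'u / (n-k) ]
d = R @ b - theta
F = (d @ np.linalg.inv(R @ XtX_inv @ R.T) @ d / q) / (resid @ resid / (n - k))
print(F)   # one draw from the F(2, 197) distribution under H0
```

Comparing F against the upper critical value of an F(q, n-k) distribution then gives the rejection rule: large values are evidence against the null.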

This note was uploaded on 07/28/2011 for the course ECON 102C taught by Professor Pistaferri,l during the Spring '11 term at Stanford.
