Lecture 5. EC3062 ECONOMETRICS: Hypothesis Tests for the Classical Linear Model


The Normal Distribution and the Sampling Distributions

To denote that x is a normally distributed random variable with a mean of E(x) = µ and a dispersion matrix of D(x) = Σ, we shall write x ∼ N(µ, Σ). A standard normal vector z ∼ N(0, I) has E(z) = 0 and D(z) = I. Any normal vector x ∼ N(µ, Σ) can be standardised:

(1) If T is a transformation such that TΣT′ = I and T′T = Σ⁻¹, then T(x − µ) ∼ N(0, I).

If z ∼ N(0, I) is a standard normal vector of n elements, then the sum of squares of its elements has a chi-square distribution of n degrees of freedom; this is denoted by z′z ∼ χ²(n). With the help of the standardising transformation, it can be shown that

(2) If x ∼ N(µ, Σ) is a vector of order n, then (x − µ)′Σ⁻¹(x − µ) ∼ χ²(n).

(3) If u ∼ χ²(m) and v ∼ χ²(n) are independent chi-square variates of m and n degrees of freedom respectively, then (u + v) ∼ χ²(m + n) is a chi-square variate of m + n degrees of freedom.

The ratio of two independent chi-square variates divided by their respective degrees of freedom has an F distribution. Thus,

(4) If u ∼ χ²(m) and v ∼ χ²(n) are independent chi-square variates, then F = (u/m)/(v/n) has an F distribution of m and n degrees of freedom; this is denoted by writing F ∼ F(m, n).

A t variate is the ratio of a standard normal variate to the square root of an independent chi-square variate divided by its degrees of freedom. Thus,

(5) If z ∼ N(0, 1) and v ∼ χ²(n) are independent variates, then t = z/√(v/n) has a t distribution of n degrees of freedom; this is denoted by writing t ∼ t(n).

It is clear that t² ∼ F(1, n).

Hypotheses Concerning the Coefficients

The OLS estimate β̂ = (X′X)⁻¹X′y of β in the model (y; Xβ, σ²I) has E(β̂) = β and D(β̂) = σ²(X′X)⁻¹. Thus, if y ∼ N(Xβ, σ²I), then

(6) β̂ ∼ N_k{β, σ²(X′X)⁻¹}.

Likewise, the marginal distributions of β̂₁ and β̂₂ within β̂ = [β̂₁′, β̂₂′]′ are given by

(7) β̂₁ ∼ N_{k₁}(β₁, σ²{X₁′(I − P₂)X₁}⁻¹), where P₂ = X₂(X₂′X₂)⁻¹X₂′,

(8) β̂₂ ∼ N_{k₂}(β₂, σ²{X₂′(I − P₁)X₂}⁻¹), where P₁ = X₁(X₁′X₁)⁻¹X₁′.

From the results under (2) to (6), it follows that

(9) σ⁻²(β̂ − β)′X′X(β̂ − β) ∼ χ²(k).

Similarly, it follows from (7) and (8) that

(10) σ⁻²(β̂₁ − β₁)′X₁′(I − P₂)X₁(β̂₁ − β₁) ∼ χ²(k₁),

(11) σ⁻²(β̂₂ − β₂)′X₂′(I − P₁)X₂(β̂₂ − β₂) ∼ χ²(k₂).

The residual vector e = y − Xβ̂ has E(e) = 0 and D(e) = σ²(I − P), which is singular. Here, P = X(X′X)⁻¹X′ and I − P = C₁C₁′, where C₁ is a T × (T − k) matrix of T − k orthonormal columns that are orthogonal to the columns of X, such that C₁′X = 0. Since C₁′C₁ = I_{T−k}, it follows that, if y ∼ N_T(Xβ, σ²I), then C₁′y ∼ N_{T−k}(0, σ²I). Hence

(12) σ⁻²y′C₁C₁′y = σ⁻²(y − Xβ̂)′(y − Xβ̂) ∼ χ²(T − k).

Since they have a zero-valued covariance matrix, Xβ̂ = Py and y − Xβ̂ = (I − P)y are statistically independent. It follows that

(13) σ⁻²(β̂ − β)′X′X(β̂ − β) ∼ χ²(k) and σ⁻²(y − Xβ̂)′(y − Xβ̂) ∼ χ²(T − k)

are mutually independent chi-square variates. From this, it can be deduced that

(14) F = {(β̂ − β)′X′X(β̂ − β)/k} / {(y − Xβ̂)′(y − Xβ̂)/(T − k)} = (β̂ − β)′X′X(β̂ − β)/(σ̂²k) ∼ F(k, T − k).

To test the hypothesis that β assumes a particular value, the hypothesised value can be inserted in the above statistic and the resulting value can be compared with the critical values of an F distribution of k and T − k degrees of freedom. If a critical value is exceeded, then the hypothesis is liable to be rejected. The test is based on a measure of the distance between the hypothesised value Xβ of the systematic component of the regression and the value Xβ̂ that is suggested by the data. If the two values are remote from each other, then we may suspect that the hypothesis is at fault.

Figure 1. The critical region, at the 10% significance level, of an F(5, 60) statistic.
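To make the test of (14) concrete, the following is a minimal numerical sketch in Python. It assumes that numpy and scipy are available and that the data are simulated; the sample size, the data-generating process and the variable names are illustrative assumptions, not part of the original notes.

```python
# A minimal sketch of the F test in (14), using simulated data.
# numpy and scipy are assumed; all names are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, k = 60, 5
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta_true = np.array([1.0, 0.5, -0.3, 0.0, 2.0])
y = X @ beta_true + rng.normal(scale=2.0, size=T)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                 # OLS estimate
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (T - k)         # estimated disturbance variance, as in (22)

beta_hyp = beta_true                         # hypothesised value of beta (true here)
d = beta_hat - beta_hyp
F = (d @ (X.T @ X) @ d) / (k * sigma2_hat)   # statistic of (14)
crit = stats.f.ppf(0.90, k, T - k)           # 10% critical value of F(k, T - k)
print(F, crit, F > crit)
```

With the hypothesised value set equal to the true parameter vector, the statistic exceeds the 10% critical value in roughly 10% of repeated simulations.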
To test the hypothesis that β₂ assumes a specified value in the model y = X₁β₁ + X₂β₂ + ε, while ignoring β₁, we use

(15) F = (β̂₂ − β₂)′X₂′(I − P₁)X₂(β̂₂ − β₂)/(σ̂²k₂).

This will have an F(k₂, T − k) distribution, if the hypothesis is true. By specialising the expression under (15), a statistic may be derived for testing the hypothesis that βᵢ assumes a specified value, concerning a single element:

(16) F = (β̂ᵢ − βᵢ)²/(σ̂²wᵢᵢ).

Here, wᵢᵢ stands for the ith diagonal element of (X′X)⁻¹. If the hypothesis is true, then this will have an F(1, T − k) distribution. However, the usual way of testing such an hypothesis is to use

(17) t = (β̂ᵢ − βᵢ)/√(σ̂²wᵢᵢ)

in conjunction with the tables of the t(T − k) distribution. The t statistic shows the direction in which the estimate of βᵢ deviates from the hypothesised value as well as the size of the deviation.

The Decomposition of a Chi-Square Variate: Cochran's Theorem

The standard test of an hypothesis regarding the vector β in the model N(y; Xβ, σ²I) entails a multi-dimensional version of Pythagoras' Theorem. Consider the decomposition of the vector y into the systematic component and the residual vector. This gives

(18) y = Xβ̂ + (y − Xβ̂) and y − Xβ = (Xβ̂ − Xβ) + (y − Xβ̂),

where the second equation comes from subtracting the unknown mean vector Xβ from both sides of the first. In terms of the projector P = X(X′X)⁻¹X′, there is Xβ̂ = Py and e = y − Xβ̂ = (I − P)y. Also, ε = y − Xβ. Therefore, the two equations can be written as

(19) y = Py + (I − P)y and ε = Pε + (I − P)ε.

Figure 2. The vector Py = Xβ̂ is formed by the orthogonal projection of the vector y onto the subspace spanned by the columns of the matrix X. Here, γ = Xβ is considered to be the true value of the mean vector.

From the fact that P = P′ = P² and that P(I − P) = 0, it follows that

(20) ε′ε = ε′Pε + ε′(I − P)ε or, equivalently, ε′ε = (Xβ̂ − Xβ)′(Xβ̂ − Xβ) + (y − Xβ̂)′(y − Xβ̂).

The usual test of an hypothesis regarding the elements of the vector β is based on these relationships, which are depicted in Figure 2. To test the hypothesis that β is the true value, the hypothesised mean Xβ is compared with the estimated mean vector Xβ̂. The squared distance that separates the vectors is

(21) ε′Pε = (Xβ̂ − Xβ)′(Xβ̂ − Xβ).

This is compared with the estimated variance of the disturbance term,

(22) σ̂² = (y − Xβ̂)′(y − Xβ̂)/(T − k) = ε′(I − P)ε/(T − k),

of which the numerator is the squared length of e = (I − P)y = (I − P)ε.
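As an illustration of the decomposition in (20) and of the t statistic in (17), here is a minimal Python sketch with simulated data. It assumes numpy and scipy; the data-generating process, the chosen coefficient and the variable names are illustrative assumptions.

```python
# A minimal sketch of the Pythagorean decomposition (20) and the t test of (17).
# numpy and scipy are assumed; all names are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, k = 50, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta = np.array([1.0, 0.0, -1.5])
y = X @ beta + rng.normal(size=T)

P = X @ np.linalg.inv(X.T @ X) @ X.T         # projector onto the column space of X
eps = y - X @ beta                           # disturbances, known here by construction
# Pythagoras: eps'eps = eps'P eps + eps'(I - P) eps, as in (20)
assert np.isclose(eps @ eps, eps @ P @ eps + eps @ (np.eye(T) - P) @ eps)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (T - k)         # estimated disturbance variance, as in (22)

i = 1                                        # test the hypothesis beta_i = 0
w_ii = np.linalg.inv(X.T @ X)[i, i]
t = (beta_hat[i] - 0.0) / np.sqrt(sigma2_hat * w_ii)   # t statistic of (17)
p_value = 2 * (1 - stats.t.cdf(abs(t), T - k))
print(t, p_value)
```

The assert verifies the identity (20) up to floating-point precision; the final two numbers are the t statistic of (17) and its two-sided p-value from the t(T − k) distribution.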
The arguments of the previous section demonstrate that

(23) (a) ε′ε = (y − Xβ)′(y − Xβ) ∼ σ²χ²(T),
     (b) ε′Pε = (β̂ − β)′X′X(β̂ − β) ∼ σ²χ²(k),
     (c) ε′(I − P)ε = (y − Xβ̂)′(y − Xβ̂) ∼ σ²χ²(T − k),

where (b) and (c) represent statistically independent random variables whose sum is the random variable of (a). These quadratic forms, divided by their respective degrees of freedom, find their way into the F statistic of (14), which is

(24) F = {ε′Pε/k} / {ε′(I − P)ε/(T − k)} ∼ F(k, T − k).

Cochran's Theorem

(25) Let ε ∼ N(0, σ²I_T) be a random vector of T independently and identically distributed elements. Also, let P = X(X′X)⁻¹X′, where X is of order T × k with Rank(X) = k. Then

ε′ε/σ² = ε′Pε/σ² + ε′(I − P)ε/σ² ∼ χ²(T),

which is a chi-square variate of T degrees of freedom, represents the sum of two independent chi-square variates ε′Pε/σ² ∼ χ²(k) and ε′(I − P)ε/σ² ∼ χ²(T − k) of k and T − k degrees of freedom respectively.

Proof. To find an alternative expression for P = X(X′X)⁻¹X′, consider a matrix T such that T′(X′X)T = I and TT′ = (X′X)⁻¹. Then, P = XTT′X′ = C₁C₁′, where C₁ = XT is a T × k matrix comprising k orthonormal vectors, such that C₁′C₁ = I_k is the identity matrix of order k.

Now define C₂ to be a complementary matrix of T − k orthonormal vectors. Then, C = [C₁, C₂] is an orthonormal matrix of order T such that

(26) CC′ = C₁C₁′ + C₂C₂′ = I_T and C′C = [C₁′C₁ C₁′C₂; C₂′C₁ C₂′C₂] = [I_k 0; 0 I_{T−k}].

The first of these results allows us to set I − P = I − C₁C₁′ = C₂C₂′. Now, if ε ∼ N(0, σ²I_T) and if C is an orthonormal matrix such that C′C = I_T, then it follows that C′ε ∼ N(0, σ²I_T). On partitioning C′ε, we find that

(27) [C₁′ε; C₂′ε] ∼ N([0; 0], [σ²I_k 0; 0 σ²I_{T−k}]),

which is to say that C₁′ε ∼ N(0, σ²I_k) and C₂′ε ∼ N(0, σ²I_{T−k}) are independently distributed normal vectors.

It follows that

(28) ε′C₁C₁′ε/σ² = ε′Pε/σ² ∼ χ²(k) and ε′C₂C₂′ε/σ² = ε′(I − P)ε/σ² ∼ χ²(T − k)

are independent chi-square variates. Since C₁C₁′ + C₂C₂′ = I_T, the sum of these two variates is

(29) ε′C₁C₁′ε/σ² + ε′C₂C₂′ε/σ² = ε′ε/σ² ∼ χ²(T);

and thus the theorem is proved.

The statistic under (14) can now be expressed in the form of

(30) F = {ε′Pε/k} / {ε′(I − P)ε/(T − k)}.

This is manifestly the ratio of two chi-square variates divided by their respective degrees of freedom; and so it has an F distribution with these degrees of freedom. This result provides the means for testing the hypothesis concerning the parameter vector β.

Hypotheses Concerning Subsets of the Regression Coefficients

Consider the restrictions Rβ = r on the regression coefficients of the model N(y; Xβ, σ²I), where R is a j × k matrix of rank j. Given that β̂ ∼ N{β, σ²(X′X)⁻¹}, it follows that

(32) Rβ̂ ∼ N{Rβ = r, σ²R(X′X)⁻¹R′};

and, from this, it can be inferred immediately that

(33) σ⁻²(Rβ̂ − r)′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r) ∼ χ²(j).

Since the residual sum of squares is statistically independent of β̂,

(34) (y − Xβ̂)′(y − Xβ̂)/σ² = (T − k)σ̂²/σ² ∼ χ²(T − k)

must be statistically independent of the chi-square variate of (33). Therefore, it can be deduced that

(36) F = [(Rβ̂ − r)′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r)/j] / [(y − Xβ̂)′(y − Xβ̂)/(T − k)]
       = (Rβ̂ − r)′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r)/(σ̂²j) ∼ F(j, T − k).

This F statistic can be used in testing the validity of the hypothesised restrictions Rβ = r.
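The statistic of (36) is straightforward to compute directly. The following is a minimal Python sketch that forms the F statistic for a set of linear restrictions Rβ = r on simulated data; it assumes numpy and scipy, and the particular restrictions, sample size and names are illustrative assumptions rather than anything prescribed by the notes.

```python
# A minimal sketch of the F test of (36) for linear restrictions R beta = r.
# numpy and scipy are assumed; the restrictions and data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, k = 80, 4
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta = np.array([0.5, 1.0, 1.0, 0.0])
y = X @ beta + rng.normal(size=T)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (T - k)

# Restrictions (j = 2): beta[1] = beta[2] and beta[3] = 0
R = np.array([[0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0,  0.0, 1.0]])
r = np.zeros(2)
j = R.shape[0]

d = R @ beta_hat - r
F = d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / (j * sigma2_hat)   # statistic of (36)
p_value = 1 - stats.f.cdf(F, j, T - k)
print(F, p_value)
```

The p-value is obtained from the F(j, T − k) distribution, in line with (36).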
Let β = [β₁′, β₂′]′. Then, the condition that the subvector β₁ assumes a specified value β₁ can be expressed via the equation

(37) [I_{k₁}, 0][β₁′, β₂′]′ = β₁.

This can be construed as a case of the equation Rβ = r, where R = [I_{k₁}, 0] and r = β₁.

The partitioned form of (X′X)⁻¹ is

(X′X)⁻¹ = [X₁′X₁ X₁′X₂; X₂′X₁ X₂′X₂]⁻¹
        = [{X₁′(I − P₂)X₁}⁻¹                    −{X₁′(I − P₂)X₁}⁻¹X₁′X₂(X₂′X₂)⁻¹;
           −{X₂′(I − P₁)X₂}⁻¹X₂′X₁(X₁′X₁)⁻¹      {X₂′(I − P₁)X₂}⁻¹].

With R = [I, 0], we find that

(39) R(X′X)⁻¹R′ = {X₁′(I − P₂)X₁}⁻¹.

Therefore, for testing the hypothesis that β₁ assumes a specified value, we use

(40) F = [(β̂₁ − β₁)′X₁′(I − P₂)X₁(β̂₁ − β₁)/k₁] / [(y − Xβ̂)′(y − Xβ̂)/(T − k)]
       = (β̂₁ − β₁)′X₁′(I − P₂)X₁(β̂₁ − β₁)/(σ̂²k₁) ∼ F(k₁, T − k).

Finally, for the jth element of β̂, there is

(41) (β̂ⱼ − βⱼ)²/(σ̂²wⱼⱼ) ∼ F(1, T − k) or, equivalently, (β̂ⱼ − βⱼ)/√(σ̂²wⱼⱼ) ∼ t(T − k),

where wⱼⱼ is the jth diagonal element of (X′X)⁻¹ and t(T − k) denotes the t distribution of T − k degrees of freedom.

An Alternative Formulation of the F statistic

An alternative way of forming the F statistic uses the products of two separate regressions. Consider the formula for the restricted least-squares estimator that has been given under (2.76):

(42) β* = β̂ − (X′X)⁻¹R′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r).

From this, the following expression for the residual vector of the restricted regression is derived:

(43) y − Xβ* = (y − Xβ̂) + X(X′X)⁻¹R′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r).

The two terms on the RHS are mutually orthogonal on account of the condition (y − Xβ̂)′X = 0. Therefore, the residual sum of squares of the restricted regression is

(44) (y − Xβ*)′(y − Xβ*) = (y − Xβ̂)′(y − Xβ̂) + (Rβ̂ − r)′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r).

This equation can be rewritten as

(45) RSS − USS = (Rβ̂ − r)′{R(X′X)⁻¹R′}⁻¹(Rβ̂ − r),

where RSS denotes the restricted sum of squares and USS denotes the unrestricted sum of squares. It follows that the test statistic of (36) can be written as

(46) F = {(RSS − USS)/j} / {USS/(T − k)}.

This formulation can be used, for example, in testing the restriction that β₁ = 0 in the partitioned model N(y; X₁β₁ + X₂β₂, σ²I). Then, in terms of equation (37), there is R = [I_{k₁}, 0] and r = β₁ = 0, which gives

(47) RSS − USS = β̂₁′X₁′(I − P₂)X₁β̂₁ = y′(I − P₂)X₁{X₁′(I − P₂)X₁}⁻¹X₁′(I − P₂)y.

On the other hand, there is

(48) RSS − USS = y′(I − P₂)y − y′(I − P)y = y′(P − P₂)y.

Since the two expressions must be identical for all values of y, the comparison of (47) and (48) is sufficient to establish the following identity:

(49) (I − P₂)X₁{X₁′(I − P₂)X₁}⁻¹X₁′(I − P₂) = P − P₂.

It can be understood, in reference to Figure 3, that the square of the distance between the restricted estimate Xβ* and the unrestricted estimate Xβ̂, denoted by ‖Xβ̂ − Xβ*‖², which is the basis of the original formulation of the test statistic, is equal to the restricted sum of squares ‖y − Xβ*‖² less the unrestricted sum of squares ‖y − Xβ̂‖². The latter is the basis of the alternative formulation.

Figure 3. The test of the hypothesis entailed by the restricted model is based on a measure of the proximity of the restricted estimate Xβ* and the unrestricted estimate Xβ̂. The USS is the squared distance ‖y − Xβ̂‖². The RSS is the squared distance ‖y − Xβ*‖².
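The equivalence of (36) and (46) is easy to verify numerically. The following minimal Python sketch tests β₁ = 0 by running the restricted and unrestricted regressions, forms the statistic of (46) from RSS and USS, and checks that it coincides with the Wald form of (36)/(40). It assumes numpy and scipy; the simulated data and the names are illustrative assumptions.

```python
# A minimal sketch of the (RSS - USS) form of the F statistic in (46), testing
# beta_1 = 0 in y = X1 beta_1 + X2 beta_2 + e, with a check against (36)/(40).
# numpy and scipy are assumed; all names are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T, k1, k2 = 100, 2, 3
X1 = rng.normal(size=(T, k1))
X2 = np.column_stack([np.ones(T), rng.normal(size=(T, k2 - 1))])
X = np.column_stack([X1, X2])
k = k1 + k2
y = X2 @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=T)    # beta_1 = 0 holds here

def rss(Z, y):
    """Residual sum of squares from regressing y on the columns of Z."""
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    e = y - Z @ b
    return e @ e

USS = rss(X, y)             # unrestricted: regress on [X1, X2]
RSS = rss(X2, y)            # restricted: beta_1 = 0, so regress on X2 alone
F_alt = ((RSS - USS) / k1) / (USS / (T - k))                # statistic of (46)

# Wald form of (36)/(40) with R = [I, 0] and r = 0
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
R = np.hstack([np.eye(k1), np.zeros((k1, k2))])
d = R @ beta_hat
sigma2_hat = USS / (T - k)
F_wald = d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / (k1 * sigma2_hat)

print(np.isclose(F_alt, F_wald), 1 - stats.f.cdf(F_alt, k1, T - k))
```

Under the restriction β₁ = 0, the restricted regression simply omits X₁, which is why the two residual sums of squares can be obtained from two separate least-squares fits, as the text describes.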