III_GEN_REG_K_2011

James B. McDonald
Brigham Young University
7/19/2011

III. Classical Normal Linear Regression Model Extended to the Case of k Explanatory Variables

A. Basic Concepts

Let y denote an n x 1 vector of random variables, i.e., y = (y_1, y_2, . . ., y_n)'.

1. The expected value of y is defined by

$$E(y) = \begin{pmatrix} E(y_1) \\ E(y_2) \\ \vdots \\ E(y_n) \end{pmatrix}.$$

2. The variance of the vector y is defined by

$$\mathrm{Var}(y) = \begin{pmatrix}
\mathrm{Var}(y_1) & \mathrm{Cov}(y_1, y_2) & \cdots & \mathrm{Cov}(y_1, y_n) \\
\mathrm{Cov}(y_2, y_1) & \mathrm{Var}(y_2) & \cdots & \mathrm{Cov}(y_2, y_n) \\
\vdots & \vdots & & \vdots \\
\mathrm{Cov}(y_n, y_1) & \mathrm{Cov}(y_n, y_2) & \cdots & \mathrm{Var}(y_n)
\end{pmatrix}.$$

NOTE: Let μ = E(y); then

$$\mathrm{Var}(y) = E[(y - \mu)(y - \mu)'] = E\!\left[ \begin{pmatrix} y_1 - \mu_1 \\ \vdots \\ y_n - \mu_n \end{pmatrix} (y_1 - \mu_1, \ \ldots, \ y_n - \mu_n) \right]$$

$$= \begin{pmatrix}
E(y_1 - \mu_1)^2 & E(y_1 - \mu_1)(y_2 - \mu_2) & \cdots & E(y_1 - \mu_1)(y_n - \mu_n) \\
E(y_2 - \mu_2)(y_1 - \mu_1) & E(y_2 - \mu_2)^2 & \cdots & E(y_2 - \mu_2)(y_n - \mu_n) \\
\vdots & \vdots & & \vdots \\
E(y_n - \mu_n)(y_1 - \mu_1) & E(y_n - \mu_n)(y_2 - \mu_2) & \cdots & E(y_n - \mu_n)^2
\end{pmatrix}
= \begin{pmatrix}
\mathrm{Var}(y_1) & \mathrm{Cov}(y_1, y_2) & \cdots & \mathrm{Cov}(y_1, y_n) \\
\mathrm{Cov}(y_2, y_1) & \mathrm{Var}(y_2) & \cdots & \mathrm{Cov}(y_2, y_n) \\
\vdots & \vdots & & \vdots \\
\mathrm{Cov}(y_n, y_1) & \mathrm{Cov}(y_n, y_2) & \cdots & \mathrm{Var}(y_n)
\end{pmatrix}.$$
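As a brief numerical sketch (not part of the original notes; the mean vector and covariance matrix below are arbitrary illustrative choices), the definitions of E(y) and Var(y) = E[(y - μ)(y - μ)'] can be approximated from simulated draws of a random vector:

```python
# Illustrative sketch: approximate E(y) and Var(y) = E[(y - mu)(y - mu)'] by simulation.
# mu and Sigma below are arbitrary example values, not taken from the notes.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0, 3.0])                   # E(y) for a 3 x 1 random vector
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])              # Var(y)

draws = rng.multivariate_normal(mu, Sigma, size=200_000)   # one row per realization of y

mu_hat = draws.mean(axis=0)                      # sample analogue of E(y)
dev = draws - mu_hat                             # (y - mu) for each draw
Sigma_hat = dev.T @ dev / len(draws)             # sample analogue of E[(y - mu)(y - mu)']

print(mu_hat)                                    # close to mu
print(Sigma_hat)                                 # close to Sigma
```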
3. The n x 1 vector of random variables, y, is said to be distributed as a multivariate normal with mean vector μ and variance-covariance matrix Σ (denoted y ~ N(μ, Σ)) if the probability density function of y is given by

$$f(y; \mu, \Sigma) = \frac{e^{-\frac{1}{2}(y - \mu)'\Sigma^{-1}(y - \mu)}}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}.$$

Special case (n = 1): y = (y_1), μ = (μ_1), Σ = (σ²):

$$f(y_1; \mu_1, \sigma^2) = \frac{e^{-\frac{1}{2}(y_1 - \mu_1)(\sigma^2)^{-1}(y_1 - \mu_1)}}{(2\pi)^{1/2}(\sigma^2)^{1/2}}
= \frac{e^{-\frac{(y_1 - \mu_1)^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}}.$$
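As an illustration (not from the notes; μ, Σ, and the evaluation point are arbitrary example values), the density formula above can be evaluated directly and compared with SciPy's multivariate normal implementation:

```python
# Illustrative sketch: evaluate the multivariate normal density formula directly
# and compare with SciPy.  mu, Sigma, and the point y are arbitrary example values.
import numpy as np
from scipy import stats

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
y = np.array([1.5, 1.0])                              # point at which to evaluate f(y; mu, Sigma)
n = len(y)

dev = y - mu
quad = dev @ np.linalg.inv(Sigma) @ dev               # (y - mu)' Sigma^{-1} (y - mu)
f = np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5)

print(f)                                              # density from the formula
print(stats.multivariate_normal(mu, Sigma).pdf(y))    # SciPy's value; should agree
```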
4. Some Useful Theorems

a. If y ~ N(μ_y, Σ_y), then z = Ay ~ N(μ_z = Aμ_y, Σ_z = AΣ_y A'), where A is a matrix of constants.

b. If y ~ N(0, I) and A is a symmetric idempotent matrix, then y'Ay ~ χ²(m), where m = Rank(A) = trace(A).

c. If y ~ N(0, I) and L is a k x n matrix of rank k, then Ly and y'Ay are independently distributed if LA = 0.

d. If y ~ N(0, I), then the idempotent quadratic forms y'Ay and y'By are independently distributed χ² variables if AB = 0.

NOTE:

(1) Proof of (a):

$$E(z) = E(Ay) = AE(y) = A\mu_y$$

$$\mathrm{Var}(z) = E[(z - E(z))(z - E(z))'] = E[(Ay - A\mu_y)(Ay - A\mu_y)'] = E[A(y - \mu_y)(y - \mu_y)'A'] = A\,E[(y - \mu_y)(y - \mu_y)']\,A' = A\Sigma_y A' = \Sigma_z.$$
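A quick simulation sketch of Theorem 4(b) (illustrative only; the centering matrix used for A below is a standard choice of a symmetric idempotent matrix, not something specified in the notes):

```python
# Illustrative sketch of Theorem 4(b): for y ~ N(0, I) and a symmetric idempotent A,
# y'Ay ~ chi-square with df = trace(A).  The centering matrix below is one standard
# choice of such an A; nothing here comes from the notes themselves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5
A = np.eye(n) - np.ones((n, n)) / n          # symmetric, idempotent, trace = n - 1
assert np.allclose(A, A @ A)                 # idempotency check

df = np.trace(A)                             # = rank(A) = n - 1
y = rng.standard_normal((100_000, n))        # rows are draws of y ~ N(0, I)
q = np.einsum('ti,ij,tj->t', y, A, y)        # y'Ay for each draw

print(q.mean(), df)                                    # chi-square mean equals its df
print(stats.kstest(q, 'chi2', args=(df,)).pvalue)      # consistent with chi2(df)
```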
(2) Example: Let y_1, . . ., y_n denote a random sample drawn from N(μ, σ²), so that

$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \sim N\!\left[ \begin{pmatrix} \mu \\ \vdots \\ \mu \end{pmatrix}, \; \sigma^2 \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix} \right].$$

Note that the sample mean can be written as

$$\bar{y} = \frac{1}{n} y_1 + \cdots + \frac{1}{n} y_n = Ay, \quad \text{where } A = \left( \frac{1}{n}, \ldots, \frac{1}{n} \right).$$

Recall that "Useful" Theorem 4(a) implies that

$$\bar{y} = Ay \sim N\left[ A\mu, \; A(\sigma^2 I)A' \right].$$

Verify that

(a) $$A\mu = \left( \frac{1}{n}, \ldots, \frac{1}{n} \right) \begin{pmatrix} \mu \\ \vdots \\ \mu \end{pmatrix} = \mu$$

(b) $$A(\sigma^2 I)A' = \sigma^2 \left( \frac{1}{n}, \ldots, \frac{1}{n} \right) \begin{pmatrix} 1/n \\ \vdots \\ 1/n \end{pmatrix} = \sigma^2 / n;$$

hence, $\bar{y} \sim N(\mu, \sigma^2/n)$.
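A small numerical check of this example (the values of n, μ, and σ below are arbitrary illustrations, not from the notes):

```python
# Illustrative check of the sample-mean example: with A = (1/n, ..., 1/n),
# A mu = mu and A (sigma^2 I) A' = sigma^2 / n.  n, mu, sigma are arbitrary values.
import numpy as np

rng = np.random.default_rng(2)
n, mu, sigma = 25, 4.0, 2.0
A = np.full((1, n), 1.0 / n)                     # A = (1/n, ..., 1/n)

print(A @ (mu * np.ones(n)))                     # equals mu
print(A @ (sigma**2 * np.eye(n)) @ A.T)          # equals sigma^2 / n
print(sigma**2 / n)

# Simulated sample means behave like N(mu, sigma^2 / n).
ybar = rng.normal(mu, sigma, size=(200_000, n)).mean(axis=1)
print(ybar.mean(), ybar.var())                   # close to mu and sigma^2 / n
```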
B. The Basic Model

Consider the model defined by

(1)   y_t = β_1 x_t1 + β_2 x_t2 + . . . + β_k x_tk + ε_t     (t = 1, . . ., n).

If we want to include an intercept, define x_t1 = 1 for all t and we obtain

(2)   y_t = β_1 + β_2 x_t2 + . . . + β_k x_tk + ε_t.

Note that β_i can be interpreted as the marginal impact of a unit increase in x_i on the expected value of y.

The error terms (ε_t) in (1) will be assumed to satisfy:

(A.1) ε_t is distributed normally
(A.2) E(ε_t) = 0 for all t
(A.3) Var(ε_t) = σ² for all t
(A.4) Cov(ε_t, ε_s) = 0 for t ≠ s.

Rewriting (1) for each t (t = 1, 2, . . ., n) we obtain

      y_1 = β_1 x_11 + β_2 x_12 + . . . + β_k x_1k + ε_1
      y_2 = β_1 x_21 + β_2 x_22 + . . . + β_k x_2k + ε_2
(3)     .                .                  .
        .                .                  .
      y_n = β_1 x_n1 + β_2 x_n2 + . . . + β_k x_nk + ε_n.

The system of equations (3) is equivalent to the matrix representation

      y = Xβ + ε
where the matrices y, X, β, and ε are defined as follows:

$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad
X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1k} \\ x_{21} & x_{22} & \cdots & x_{2k} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nk} \end{pmatrix}, \qquad
\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_k \end{pmatrix}, \qquad
\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix},$$

so that each column of X contains the n observations on one of the k individual explanatory variables.
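A minimal simulation sketch of the basic model (the values of n, k, β, and σ are hypothetical; the notes do not specify any), showing the shapes of y, X, β, and ε and errors satisfying (A.1)-(A.4):

```python
# Illustrative sketch: constructing y = X beta + epsilon with the shapes above and
# simulating data under (A.1)-(A.4).  n, k, beta, and sigma are hypothetical values.
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 3
sigma = 1.5
beta = np.array([2.0, -1.0, 0.5])                          # k x 1 coefficient vector

X = np.column_stack([np.ones(n),                           # x_t1 = 1 for all t (intercept)
                     rng.uniform(0, 10, size=(n, k - 1))]) # remaining k - 1 regressors
eps = rng.normal(0.0, sigma, size=n)                       # iid N(0, sigma^2) errors
y = X @ beta + eps                                         # n x 1 vector of observations

print(y.shape, X.shape, beta.shape, eps.shape)             # (100,) (100, 3) (3,) (100,)
```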