# 102c_lecture3: Law of iterated expectations (LIE)


## Law of iterated expectations (LIE)

This "law" states that

$$E(y) = E_x\left[E(y \mid x)\right],$$

where $E_x$ means that the expectation is calculated with respect to the distribution of $x$. In other words,

$$E(y) = \int_a^b E(y \mid x)\, f(x)\, dx,$$

where $a$ and $b$ are the lower and upper support of the distribution of $x$, respectively, and $f(x)$ is the p.d.f. of $x$.

To prove this, note that $E(y \mid x)$ is generally a function of $x$. For example, in the linear regression model, $E(y \mid x) = \beta_0 + \beta_1 x$. Call this function $g(x) = E(y \mid x)$. Its mean is

$$
\begin{aligned}
E_x\left[E(y \mid x)\right] = E_x\left[g(x)\right]
&= \int g(x)\, f(x)\, dx && \text{because } E(y \mid x) = g(x) \\
&= \int E(y \mid x)\, f(x)\, dx \\
&= \int \left[\int y\, f(y \mid x)\, dy\right] f(x)\, dx && \text{by definition of conditional expectation} \\
&= \int\!\!\int y\, f(y \mid x)\, f(x)\, dy\, dx && \text{bringing } f(x) \text{ inside the inner integral} \\
&= \int\!\!\int y\, f(x, y)\, dy\, dx && \text{because } f(y \mid x) = f(x, y)/f(x) \\
&= \int y\, f(y)\, dy = E(y) && \text{because } \int f(x, y)\, dx = f(y).
\end{aligned}
$$

In fact, the LIE is more general than that: it states that

$$E\left[h(y)\right] = E_x\left[E\left(h(y) \mid x\right)\right].$$

One example is $h(y) = y^2$. One can prove that $E(y^2) = E_x\left[E(y^2 \mid x)\right]$.
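The LIE is easy to check numerically. Below is a minimal Monte Carlo sketch (not part of the original notes) using NumPy; the linear conditional mean $E(y \mid x) = \beta_0 + \beta_1 x$ matches the regression example above, but the specific parameter values and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
beta0, beta1 = 2.0, 3.0  # hypothetical coefficients

# Draw x, then y | x ~ Normal(beta0 + beta1*x, 1), so E(y|x) = beta0 + beta1*x.
x = rng.uniform(0.0, 1.0, n)
y = beta0 + beta1 * x + rng.standard_normal(n)

# Left side: E(y) estimated directly.
lhs = y.mean()
# Right side: E_x[E(y|x)] = average of g(x) = beta0 + beta1*x over draws of x.
rhs = (beta0 + beta1 * x).mean()

print(lhs, rhs)  # both close to beta0 + beta1 * E(x) = 2 + 3 * 0.5 = 3.5
```

Averaging the conditional mean $g(x)$ over the distribution of $x$ reproduces the unconditional mean, exactly as the integral derivation above says.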

## Variance decomposition

The variance decomposition property allows us to write

$$\operatorname{var}(y) = \operatorname{var}_x\left[E(y \mid x)\right] + E_x\left[\operatorname{var}(y \mid x)\right].$$

The proof is fairly simple and uses the LIE together with the fact that

$$
\operatorname{var}(x) = E\left[\left(x - E(x)\right)^2\right]
= E\left[x^2 - 2x E(x) + E(x)^2\right]
= E(x^2) - 2E(x)E(x) + E(x)^2
= E(x^2) - E(x)^2.
$$

Let's write

$$
\begin{aligned}
\operatorname{var}(y) &= E(y^2) - E(y)^2 \\
&= E_x\left[E(y^2 \mid x)\right] - E(y)^2 && \text{by application of the LIE} \\
&= E_x\left[\operatorname{var}(y \mid x) + E(y \mid x)^2\right] - E(y)^2 && \text{adding and subtracting } E(y \mid x)^2 \text{ inside} \\
&= E_x\left[\operatorname{var}(y \mid x)\right] + E_x\left[E(y \mid x)^2\right] - E_x\left[E(y \mid x)\right]^2 && \text{because } E(y) = E_x\left[E(y \mid x)\right] \\
&= E_x\left[\operatorname{var}(y \mid x)\right] + \operatorname{var}_x\left[E(y \mid x)\right].
\end{aligned}
$$

An important corollary is for the covariance:

$$
\begin{aligned}
\operatorname{cov}(x, y) &= E(xy) - E(x)E(y) \\
&= E_x\left[E(xy \mid x)\right] - E(x)\, E_x\left[E(y \mid x)\right] && \text{by the LIE} \\
&= E_x\left[x\, E(y \mid x)\right] - E(x)\, E_x\left[E(y \mid x)\right] && \text{because } x \text{ is fixed conditional on } x \\
&= \operatorname{cov}_x\left(x, E(y \mid x)\right).
\end{aligned}
$$

The reason why this is important is the following. In the linear regression model, when $X$ is stochastic, we make the assumption

$$E(u \mid X) = 0.$$

Now let's compute the covariance between $u$ and $X$. It's
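Both the variance decomposition and the covariance corollary can be verified by simulation. The sketch below (my addition, reusing the illustrative linear model and parameter values from the LIE example) exploits the fact that with $y \mid x \sim N(\beta_0 + \beta_1 x, \sigma^2)$ the conditional variance is the constant $\sigma^2$ and the conditional mean is $\beta_0 + \beta_1 x$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
beta0, beta1, sigma = 2.0, 3.0, 1.0  # hypothetical values

# x ~ Uniform(0, 1); y | x ~ Normal(beta0 + beta1*x, sigma^2).
x = rng.uniform(0.0, 1.0, n)
y = beta0 + beta1 * x + sigma * rng.standard_normal(n)

# Variance decomposition: var(y) = var_x(E(y|x)) + E_x(var(y|x)).
cond_mean = beta0 + beta1 * x       # E(y|x)
var_of_cond_mean = cond_mean.var()  # var_x(E(y|x)) = beta1^2 * var(x)
mean_of_cond_var = sigma**2         # var(y|x) = sigma^2 is constant here
print(y.var(), var_of_cond_mean + mean_of_cond_var)  # approximately equal

# Covariance corollary: cov(x, y) = cov_x(x, E(y|x)).
print(np.cov(x, y)[0, 1], np.cov(x, cond_mean)[0, 1])  # approximately equal
```

The total variance splits into the variance explained by $x$ (through the conditional mean) plus the average leftover noise, which is exactly how $R^2$-style decompositions are motivated later.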
$$
\begin{aligned}
\operatorname{cov}(u, X) &= E(X'u) - E(X')E(u) \\
&= E_x\left[E(X'u \mid X)\right] - E(X')\, E_x\left[E(u \mid X)\right] && \text{by the LIE} \\
&= E_x\left[X'\, E(u \mid X)\right] - E(X') \cdot 0 \\
&= E_x\left[X' \cdot 0\right] \\
&= 0.
\end{aligned}
$$

So the assumption $E(u \mid X) = 0$ also implies $\operatorname{cov}(u, X) = 0$ and $E(X'u) = 0$. We will find the latter very useful later on.
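The implication $E(u \mid X) = 0 \Rightarrow E(X'u) = 0$ can also be illustrated numerically. In this hypothetical sketch (the design, coefficients, and sample size are my choices, not from the notes), the sample analogue $X'u/n$ is close to zero, and for OLS residuals the orthogonality $X'\hat{u} = 0$ holds exactly by the first-order conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# X: constant, age, gender -- echoing the wage example below;
# u is drawn independently of X, so E(u|X) = 0 holds by construction.
X = np.column_stack([
    np.ones(n),
    rng.uniform(20.0, 60.0, n),           # age
    rng.integers(0, 2, n).astype(float),  # gender dummy
])
u = rng.standard_normal(n)

# Sample analogue of E(X'u): close to the zero vector.
print(X.T @ u / n)

# For OLS residuals, X'e = 0 holds exactly (up to floating-point rounding).
y = X @ np.array([5.0, 0.2, 1.0]) + u     # hypothetical true beta
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat
print(X.T @ e / n)  # numerically zero
```

The contrast between the two printed vectors is instructive: the population moment is only approximately zero in a finite sample, while the OLS residual orthogonality is a mechanical identity.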

## Large sample distribution theory

One of the assumptions in the linear statistical model $y = X\beta + u$ is that $X$ is an $(n \times k)$ non-stochastic matrix. The idea is that $X$ remains fixed in repeated samples, i.e., if we have two samples of size $n$ each, the values of the matrix $X$ will remain constant across samples while the values of $y$ will change. For example, suppose we are regressing hourly wages ($y$) onto a constant, age, and gender. Then if $n = 6$, we could have something like

$$y' = (12, 11, 9, 8, 12, 8), \qquad
X = \begin{pmatrix}
1 & 25 & 1 \\
1 & 26 & 1 \\
1 & 27 & 1 \\
1 & 25 & 0 \\
1 & 26 & 0 \\
1 & 27 & 0
\end{pmatrix}$$

in our first sample, and something like $y' = (11, 7, 12, 12, 9, 9)$ with the same $X$ in our second sample. You can see that the two samples feature a different $y$ vector, but the same $X$ matrix.
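The fixed-in-repeated-samples idea can be sketched in code (illustrative only; the true $\beta$ and the error distribution are assumptions I am adding): the same design matrix is reused while a fresh error draw produces a new $y$ each time.

```python
import numpy as np

rng = np.random.default_rng(3)

# The fixed (6 x 3) design from the wage example: constant, age, gender.
X = np.array([
    [1.0, 25.0, 1.0],
    [1.0, 26.0, 1.0],
    [1.0, 27.0, 1.0],
    [1.0, 25.0, 0.0],
    [1.0, 26.0, 0.0],
    [1.0, 27.0, 0.0],
])
beta = np.array([5.0, 0.2, 1.0])  # hypothetical true coefficients

# Two "repeated samples": X is held fixed, only the error draw (and hence y) changes.
y1 = X @ beta + rng.standard_normal(6)
y2 = X @ beta + rng.standard_normal(6)
print(y1)
print(y2)
```

All sampling variation in the estimator then comes from $u$ alone, which is precisely what makes the non-stochastic-$X$ assumption convenient in finite-sample theory.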

