Problems with the IV estimator

So far we have only discussed the merits of the IV estimator. We will now also discuss its weaknesses.

1 Problems with the IV estimator

1.1 High standard errors

One of the principal weaknesses of the IV estimator is that it tends to display high standard errors relative to OLS, which may hamper inference. The reason is easy to understand. Consider the exactly identified case, where the model is $y = X\beta + u$ with $E(uu') = \sigma_u^2 I$. The IV and OLS estimators are
$$\hat{\beta}_{IV} = (Z'X)^{-1} Z'y, \qquad \hat{\beta}_{OLS} = (X'X)^{-1} X'y$$
and their (asymptotic) variances are
$$\mathrm{Avar}(\hat{\beta}_{IV}) = \sigma_u^2 (Z'X)^{-1} Z'Z (X'Z)^{-1}, \qquad \mathrm{Avar}(\hat{\beta}_{OLS}) = \sigma_u^2 (X'X)^{-1}.$$
Suppose for simplicity that $Z$ and $X$ are mean-zero vectors (i.e., we have a single regressor and a single instrument). Then
$$Z'X = X'Z = \sum_{i=1}^n z_i x_i = n\,\mathrm{cov}(z,x), \qquad Z'Z = \sum_{i=1}^n z_i^2 = n\,\mathrm{var}(z), \qquad X'X = \sum_{i=1}^n x_i^2 = n\,\mathrm{var}(x)$$
and:
$$\mathrm{Avar}(\hat{\beta}_{OLS}) = \sigma_u^2 (X'X)^{-1} = \frac{\sigma_u^2}{n} \frac{1}{\mathrm{var}(x)}$$
$$\mathrm{Avar}(\hat{\beta}_{IV}) = \sigma_u^2 (Z'X)^{-1} Z'Z (X'Z)^{-1} = \frac{\sigma_u^2}{n} \frac{\mathrm{var}(z)}{\mathrm{cov}(x,z)^2} = \frac{\sigma_u^2}{n} \frac{1}{\mathrm{var}(x)} \frac{\mathrm{var}(x)\,\mathrm{var}(z)}{\mathrm{cov}(x,z)^2} = \frac{\sigma_u^2}{n} \frac{1}{\mathrm{var}(x)} \frac{1}{\rho_{xz}^2} = \frac{\mathrm{Avar}(\hat{\beta}_{OLS})}{\rho_{xz}^2},$$
where $\rho_{xz}$ is the correlation coefficient between $x$ and $z$. Since $\rho_{xz}^2 \le 1$, it follows that $\mathrm{Avar}(\hat{\beta}_{IV}) \ge \mathrm{Avar}(\hat{\beta}_{OLS})$. Note that when we have instruments with low power, $\rho_{xz}^2 \to 0$ and $\mathrm{Avar}(\hat{\beta}_{IV}) \to \infty$. Thus high standard errors of the IV estimates are effectively an indication of low instrument power.

1.2 Finite sample properties

Under some assumptions, IV is consistent and OLS is not. We have not said anything, however, about whether IV is biased or unbiased. The reason is that the IV estimator is a strange random variable: under some circumstances it does not even have an expectation, which means that the concept of bias (which requires determining the expectation of the estimator) is meaningless.
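The variance-inflation result $\mathrm{Avar}(\hat{\beta}_{IV}) = \mathrm{Avar}(\hat{\beta}_{OLS})/\rho_{xz}^2$ can be checked by simulation. Below is a minimal sketch in Python; the sample size, number of replications, and the value $\rho = 0.5$ are all hypothetical choices. The error $u$ is deliberately made exogenous so that both estimators are consistent and only their sampling variances differ:

```python
import numpy as np

# Monte Carlo check of Avar(b_IV) = Avar(b_OLS) / rho_xz^2.
# All numbers below (n, reps, rho, beta) are illustrative choices.
rng = np.random.default_rng(0)
n, reps = 500, 2000
rho = 0.5                      # target correlation between x and z
beta = 1.0

b_iv, b_ols = [], []
for _ in range(reps):
    z = rng.standard_normal(n)
    # Build x so that corr(x, z) = rho.
    x = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    u = rng.standard_normal(n)         # exogenous, to isolate the variances
    y = beta * x + u
    b_iv.append((z @ y) / (z @ x))     # exactly identified IV estimator
    b_ols.append((x @ y) / (x @ x))    # OLS estimator

var_ratio = np.var(b_iv) / np.var(b_ols)
print(var_ratio)  # should be close to 1 / rho**2 = 4
```

With $\rho = 0.5$ the IV variance should be roughly four times the OLS variance; shrinking $\rho$ toward zero makes the ratio blow up, which is the weak-instrument symptom described above.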
In particular, Kiviet has shown that the number of moments of the IV estimator is equal to $q$, the number of overidentifying restrictions. If the model is exactly identified ($q = 0$), the IV estimator has no moments, not even the first (the expectation). Here we will show that even when $E(\hat{\beta}_{IV})$ exists, the IV estimator is biased in small samples. Consider again our model
$$y = X\beta + u, \qquad X = Z\Pi + v.$$
We are going to assume that the instrument is valid, i.e. $E(Z'u) = E(Z'v) = 0$. Further, we assume that $E(u'v) = \sigma_{uv} \ne 0$, which is the reason why OLS is biased (think of the schooling example: ability is in both $u$ and $v$). The expression for $\hat{\beta}_{IV}$ in the overidentified case is
$$\hat{\beta}_{IV} = (X'P_Z X)^{-1} X'P_Z y = \beta + (X'P_Z X)^{-1} X'P_Z u = \beta + (X'P_Z X)^{-1} (Z\Pi + v)' P_Z u = \beta + (X'P_Z X)^{-1} \Pi' Z'u + (X'P_Z X)^{-1} v' P_Z u,$$
because $Z'P_Z = Z'Z(Z'Z)^{-1}Z' = Z'$ (and $P_Z Z = Z$).
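The finite-sample bias in the overidentified case can also be seen by simulation. A rough sketch, assuming three instruments (so $q = 2$ and the first two moments exist) and a common "ability" component driving both $u$ and $v$, as in the schooling example; all parameter values are hypothetical:

```python
import numpy as np

# Monte Carlo illustration: with E(u'v) != 0, 2SLS is consistent but
# biased toward OLS in small samples. All parameter values are illustrative.
rng = np.random.default_rng(1)
n, reps = 30, 5000
beta = 1.0
pi = np.array([0.25, 0.25, 0.25])     # three instruments -> q = 2

b_2sls, b_ols = [], []
for _ in range(reps):
    Z = rng.standard_normal((n, 3))
    ability = rng.standard_normal(n)          # common component of u and v
    u = ability + 0.5 * rng.standard_normal(n)
    v = ability + 0.5 * rng.standard_normal(n)
    x = Z @ pi + v
    y = beta * x + u
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage: P_Z x
    b_2sls.append((xhat @ y) / (xhat @ x))            # 2SLS estimator
    b_ols.append((x @ y) / (x @ x))                   # OLS estimator

# Both medians exceed beta = 1: OLS is inconsistent, while 2SLS is
# consistent yet still biased in the direction of OLS at n = 30.
print(np.median(b_2sls), np.median(b_ols))
```

Medians rather than means are reported because, with only two moments, the sample mean of the 2SLS draws can be noisy. As $n$ grows (or the first stage strengthens), the 2SLS median moves toward $\beta$ while the OLS median does not.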