Economics 620, Lecture 3: Simple Regression II

$\hat\alpha$ and $\hat\beta$ are the LS estimators; $\hat y_i = \hat\alpha + \hat\beta x_i$ are the fitted values.

The Correlation Coefficient:
$$r = \frac{\sum (x_i - \bar x)(y_i - \bar y)}{\sqrt{\sum (x_i - \bar x)^2 \sum (y_i - \bar y)^2}}$$

$R^2$ = (squared) correlation between $y$ and $\hat y$.

Note: $\hat y$ is a linear function of $x$, so $\mathrm{corr}(y, \hat y) = |\mathrm{corr}(y, x)|$.

Prof. N. M. Kiefer, Econ 620, Cornell University, Lecture 3. Copyright (c) N. M. Kiefer.

Correlation

Proposition: $-1 \le r \le 1$.

Proof:
$$r^2 = \frac{\left(\sum (x_i - \bar x)(y_i - \bar y)\right)^2}{\sum (x_i - \bar x)^2 \sum (y_i - \bar y)^2}.$$
Use Cauchy-Schwarz, $\left(\sum x_i y_i\right)^2 \le \sum x_i^2 \sum y_i^2$, applied to the deviations from means: $r^2 \le 1 \Rightarrow -1 \le r \le 1$.

Proposition: $\hat\beta$ and $r$ have the same sign.

Proof:
$$\hat\beta = \frac{\sum (x_i - \bar x)(y_i - \bar y)}{\sum (x_i - \bar x)^2} = r\,\frac{\sqrt{\sum (y_i - \bar y)^2}}{\sqrt{\sum (x_i - \bar x)^2}}.$$

Correlation cont'd.

$$\sum e_i^2 = \sum (y_i - \bar y)^2 - \hat\beta^2 \sum (x_i - \bar x)^2,$$
i.e. SSR = TSS - SS explained by $x$.

Proposition:
$$r^2 = 1 - \frac{\mathrm{SSR}}{\mathrm{TSS}} = 1 - \frac{\sum e_i^2}{\sum (y_i - \bar y)^2}.$$

Proof:
$$\frac{\sum e_i^2}{\sum (y_i - \bar y)^2} = 1 - \frac{\hat\beta^2 \sum (x_i - \bar x)^2}{\sum (y_i - \bar y)^2} = 1 - r^2 \;\Rightarrow\; r^2 = 1 - \frac{\sum e_i^2}{\sum (y_i - \bar y)^2}.$$

Warning: Correlation ≠ Dependence

Variables can be completely dependent while their correlation is zero. Correlation is a measure of linear dependence.

The Likelihood Function

A complete specification of the model: the conditional distribution of the observables,
- conditional on the regressors $x$, the "exogenous variables" (variables determined outside the model);
- conditional on the parameters: $p(y \mid x; \alpha, \beta, \sigma^2)$.

Previously we specified only the mean and maybe the variance; such an incompletely specified model is "semiparametric".

Point estimate: MLE intuition. Details and asymptotic justification in Lecture 9.

Maximum Likelihood Estimators

Assumption: Normality,
$$p(y \mid x) = N(\alpha + \beta x, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2}\left(\frac{y - \alpha - \beta x}{\sigma}\right)^2\right).$$
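Before turning to the likelihood, the correlation identities above can be checked numerically. This is a quick sketch on simulated data of my own choosing (not from the notes): $R^2 = 1 - \mathrm{SSR}/\mathrm{TSS}$ equals $\mathrm{corr}(y, \hat y)^2$, which equals $\mathrm{corr}(y, x)^2$ because $\hat y$ is a linear function of $x$.

```python
import numpy as np

# Simulated data (illustrative only): y = 1 + 2x + noise.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)

# Simple LS fit by the closed-form formulas from Lecture 2.
bhat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
ahat = y.mean() - bhat * x.mean()
yhat = ahat + bhat * x
e = y - yhat

# R^2 three ways: 1 - SSR/TSS, corr(y, yhat)^2, corr(y, x)^2.
r2 = 1.0 - np.sum(e ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)
print(np.corrcoef(y, yhat)[0, 1] ** 2)
print(np.corrcoef(y, x)[0, 1] ** 2)
```

All three quantities agree up to floating-point error, and the sign of `bhat` matches the sign of the sample correlation, as the propositions above assert.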
Likelihood Function:
$$L(\alpha, \beta, \sigma^2) = \prod_{i=1}^n p(y_i \mid x_i) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2\right).$$

The maximum likelihood (ML) estimators maximize $L$. The log likelihood function is
$$\ell(\alpha, \beta, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2.$$

Maximum Likelihood cont'd.

Proposition: The LS estimators are also the ML estimators.

What is the maximum in $\sigma^2$?
$$\hat\sigma^2_{ML} = \sum_{i=1}^n (y_i - \hat\alpha - \hat\beta x_i)^2 / n.$$
Why?
$$\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2 = 0 \;\Rightarrow\; \hat\sigma^2_{ML} = \frac{1}{n} \sum_{i=1}^n (y_i - \hat\alpha - \hat\beta x_i)^2.$$
Is this a maximum in $\sigma^2$?
$$\frac{\partial^2 \ell}{\partial (\sigma^2)^2} = \frac{n}{2\sigma^4} - \frac{1}{\sigma^6} \sum (y_i - \alpha - \beta x_i)^2 = -\frac{n}{2\sigma^4} < 0,$$
evaluated at $\hat\sigma^2_{ML}$, where $\sum (y_i - \hat\alpha - \hat\beta x_i)^2 = n\hat\sigma^2_{ML}$.

Distribution of Estimators

$\hat\alpha$ and $\hat\beta$ are linear combinations of normal random variables, hence they are normal. The means and variances have already been obtained.

Distribution of $s^2$:

Fact: $\sum e_i^2$ can be written as a sum of squares of $(n-2)$ independent normal random variables with means zero and variances $\sigma^2$.

Proposition: $s^2$ is unbiased and $V(s^2) = 2\sigma^4/(n-2)$.

Proof: Note that $(n-2)s^2/\sigma^2$ is distributed as $\chi^2(n-2)$.

More Distributions

$$\Rightarrow\; E\left(\frac{s^2}{\sigma^2}(n-2)\right) = n-2 \;\Rightarrow\; E(s^2) = \sigma^2$$
$$\Rightarrow\; V\left(\frac{s^2}{\sigma^2}(n-2)\right) = 2(n-2), \text{ so } V(s^2) = \frac{2\sigma^4}{n-2}.$$

Proposition: $s^2$ has higher variance than $\hat\sigma^2_{ML}$.

Proof: Note that $n\hat\sigma^2_{ML}/\sigma^2$ is distributed as $\chi^2(n-2)$.
$$\Rightarrow\; E\left(\frac{n\hat\sigma^2_{ML}}{\sigma^2}\right) = n-2, \quad V\left(\frac{n\hat\sigma^2_{ML}}{\sigma^2}\right) = 2(n-2) \;\Rightarrow\; V(\hat\sigma^2_{ML}) = \frac{2\sigma^4(n-2)}{n^2}.$$
$$\frac{V(s^2)}{V(\hat\sigma^2_{ML})} = \frac{2\sigma^4/(n-2)}{2\sigma^4(n-2)/n^2} = \frac{n^2}{(n-2)^2} > 1.$$

Inference

$\hat\beta \sim N(\beta, \sigma^2_\beta)$, where $\sigma^2_\beta = \sigma^2 / \sum (x_i - \bar x)^2$.

Definition: A 95% confidence interval for $\beta$ is given by $\hat\beta \pm z_{0.025}\,\sigma_\beta$, where $z$ is standard normal.

Problem: The variance is unknown.

Fact: If $z \sim N(0,1)$ and $v \sim \chi^2(k)$ and they are independent, then $t = z/\sqrt{v/k}$ is distributed as $t(k)$.

$$\frac{(\hat\beta - \beta)\sqrt{\sum (x_i - \bar x)^2}}{\sigma} \sim N(0,1)$$

Proposition:
$$\frac{\hat\beta - \beta}{s/\sqrt{\sum (x_i - \bar x)^2}} \sim t(n-2).$$
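The bias and variance comparison between $s^2$ and $\hat\sigma^2_{ML}$ can be illustrated by a small Monte Carlo. The design (fixed regressors, $n = 10$, $\sigma^2 = 4$) is my own illustrative choice, not from the notes; note that the variance ratio $n^2/(n-2)^2$ holds exactly here because both estimators are scalar multiples of the same sum of squares.

```python
import numpy as np

# Monte Carlo sketch: s^2 = SSR/(n-2) is unbiased for sigma^2, while
# sigma^2_ML = SSR/n has expectation sigma^2 (n-2)/n, and
# Var(s^2)/Var(sigma^2_ML) = n^2/(n-2)^2.
rng = np.random.default_rng(1)
n, sigma2, reps = 10, 4.0, 20000
x = np.linspace(0.0, 1.0, n)
xc = x - x.mean()

ssr = np.empty(reps)
for r in range(reps):
    y = 1.0 + 2.0 * x + rng.normal(scale=np.sqrt(sigma2), size=n)
    bhat = xc @ (y - y.mean()) / (xc @ xc)
    e = (y - y.mean()) - bhat * xc      # = y - ahat - bhat*x
    ssr[r] = e @ e

s2 = ssr / (n - 2)
s2_ml = ssr / n
print(s2.mean(), s2_ml.mean())          # ~ 4.0 and ~ 4.0*(n-2)/n
print(s2.var() / s2_ml.var())           # exactly n^2/(n-2)^2 here
```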
Proof:
$$\frac{(\hat\beta - \beta)\sqrt{\sum (x_i - \bar x)^2}}{\sigma} \sim N(0,1), \qquad \frac{(n-2)s^2}{\sigma^2} \sim \chi^2(n-2)$$
$$\Rightarrow\; \frac{\hat\beta - \beta}{s/\sqrt{\sum (x_i - \bar x)^2}} \sim t(n-2).$$

Independence? Since $\hat\alpha = \bar y - \hat\beta\bar x$, the residual can be written
$$e_j = (\alpha - \hat\alpha) + (\beta - \hat\beta)x_j + \varepsilon_j = -(\hat\beta - \beta)(x_j - \bar x) + (\varepsilon_j - \bar\varepsilon),$$
and $\hat\beta - \beta = \sum (x_i - \bar x)\varepsilon_i / \sum (x_i - \bar x)^2$. Hence
$$E(\hat\beta - \beta)e_j = -(x_j - \bar x)\,E(\hat\beta - \beta)^2 + E\left[\frac{\sum (x_i - \bar x)\varepsilon_i}{\sum (x_i - \bar x)^2}\,(\varepsilon_j - \bar\varepsilon)\right].$$

Continuation of the independence argument:
$$E\left[\frac{\sum (x_i - \bar x)\varepsilon_i}{\sum (x_i - \bar x)^2}\,\varepsilon_j\right] = \frac{\sigma^2 (x_j - \bar x)}{\sum (x_i - \bar x)^2}, \qquad E\left[\frac{\sum (x_i - \bar x)\varepsilon_i}{\sum (x_i - \bar x)^2}\,\bar\varepsilon\right] = 0,$$
and $E(\hat\beta - \beta)^2 = \sigma^2 / \sum (x_i - \bar x)^2$, so the two terms cancel. Thus $E(\hat\beta - \beta)e_j = 0$.

Prof. N. M. Kiefer, Econ 620, Cornell University, Lecture 3. Copyright (c) N. M. Kiefer.

Violations of Assumptions

I. $E y_i = \alpha + \beta x_i$
II. $V(y_i \mid x_i) = V(\varepsilon_i) = \sigma^2$

The alternative to II is $\sigma_i^2$ different across observations (heteroskedasticity).

Is the LS estimator unbiased? Is it BLUE?

If the $\sigma_i$ are known we can run the "transformed" regression, and will get best linear unbiased estimates and correct standard errors. With $w_i = 1/\sigma_i$, let
$$w_i y_i = \alpha w_i + \beta x_i w_i + \varepsilon_i w_i.$$
Then $E(w_i y_i) = \alpha w_i + \beta x_i w_i$ and $V(w_i y_i) = V(\varepsilon_i w_i) = 1$. The Gauss-Markov Theorem tells us that LS is BLUE in the transformed model.

Heteroskedasticity continued

The LS estimator in the transformed model is
$$\hat\beta_w = \frac{\sum (x_i w_i - \overline{xw})\,w_i y_i}{\sum (x_i w_i - \overline{xw})^2} \ne \hat\beta.$$

Note: The variance of $\hat\beta_w$ is less than the variance of $\hat\beta$.

"Heteroskedasticity-consistent" standard errors:
$$V(\hat\beta) = E\left[\frac{\left(\sum (x_i - \bar x)\varepsilon_i\right)^2}{\left(\sum (x_i - \bar x)^2\right)^2}\right] = \frac{\sum (x_i - \bar x)^2 \sigma_i^2}{\left(\sum (x_i - \bar x)^2\right)^2}$$
$$\hat V(\hat\beta) = \frac{\sum (x_i - \bar x)^2 e_i^2}{\left(\sum (x_i - \bar x)^2\right)^2}:$$
insert $e_i$ for $\varepsilon_i$ and remove the expectation.
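The transformed regression above can be sketched numerically. This assumes the $\sigma_i$ are known; the design and the particular $\sigma_i$ pattern are my own illustrative choices, not from the notes.

```python
import numpy as np

# Weighted ("transformed") regression under known sigma_i:
# regress w_i*y_i on w_i and x_i*w_i with no additional intercept.
rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0.0, 4.0, size=n)
sigma_i = 0.5 * (1.0 + x)                 # known, observation-specific sd
y = 1.0 + 2.0 * x + rng.normal(size=n) * sigma_i

w = 1.0 / sigma_i
X = np.column_stack([w, x * w])           # transformed regressors
coef, *_ = np.linalg.lstsq(X, w * y, rcond=None)
print(coef)                               # (alpha, beta) estimates
```

In the transformed model the errors $\varepsilon_i w_i$ have unit variance, so ordinary LS on the transformed data is BLUE, as the Gauss-Markov argument above states.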
More on Heteroskedasticity

Essentially this works because $\sum e_i^2 / n$ is a reasonable estimator for $\sum \sigma_i^2 / n$, although of course $e_i^2$ is not a good estimator for $\sigma_i^2$.

Testing for heteroskedasticity: Split the sample; regress $e^2$ on stuff.

III. $E \varepsilon_i \varepsilon_j = 0$

The alternative is $E \varepsilon_i \varepsilon_j \ne 0$.

Is the LS estimator unbiased? Is it BLUE?

Testing for correlated errors: We need a hypothesis about the correlation.

More (last) on violations of assumptions

IV. Normality: $E(y_i \mid x_i) = \alpha + \beta x_i$ and $V(y_i \mid x_i) = \sigma^2$, but $\varepsilon_i \sim f(\varepsilon) \ne N(0, \sigma^2)$.

The usual suspect is a heavy-tailed distribution.

Is the LS estimator unbiased? Is it BLUE?

Example (the Laplace distribution): $f(\varepsilon) = \frac{1}{2\sigma} \exp(-|\varepsilon/\sigma|)$.

The variance of the ML estimator is half that of the LS estimator asymptotically. The minimum absolute deviation (MAD) estimator works. It is a robust estimator.

Prof. N. M. Kiefer, Econ 620, Cornell University, Lecture 3. Copyright (c) N. M. Kiefer.
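A minimal sketch of the MAD (least absolute deviations) estimator under Laplace errors. The simulated design, sample size, and the choice of a general-purpose optimizer are my own assumptions; the notes only assert that the estimator works and is robust.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate y = 1 + 2x + eps with heavy-tailed (Laplace) errors,
# then minimize the sum of absolute deviations over (alpha, beta).
rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
eps = rng.laplace(scale=1.0, size=n)      # f(e) = (1/2) exp(-|e|)
y = 1.0 + 2.0 * x + eps

def sad(theta):
    a, b = theta
    return np.sum(np.abs(y - a - b * x))  # sum of absolute deviations

res = minimize(sad, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)                              # ~ (alpha, beta) = (1, 2)
```

The objective is convex but not differentiable at the data points, which is why a derivative-free method is used here; dedicated quantile-regression routines would be the production choice.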
This note was uploaded on 12/08/2007 for the course ECON 6200 taught by Professor Kiefer during the Spring '07 term at Cornell University (Engineering School).
