LinearRegression3

Recall: the six assumptions MLR.1-MLR.6 are called the classical linear model (CLM) assumptions.


Under the CLM assumptions, OLS is not only BLUE, but also the minimum variance unbiased estimator
o This means that OLS has the smallest variance amongst all unbiased estimators (not just the linear ones)
While we assume normality here, sometimes that is not the case in practice
o One way we get around that is to rely on the CLT when we are dealing with large samples
o We can then drop the normality assumption

Outline of the Statistical Inference Lecture
1. Tests of a single linear restriction (t-tests), e.g. \beta_j = \beta_{j,0}
2. Tests of multiple linear restrictions (F-tests), e.g. \beta_i = \beta_j = 0
3. Tests of linear combinations (t-tests, also possible using 2. above), e.g. \beta_i + \beta_j = \beta_0

Normal Sampling Distributions
Under the CLM assumptions, conditional on the sample values of the independent variables,
\hat{\beta}_j \sim N\big(\beta_j, \mathrm{Var}(\hat{\beta}_j)\big), where \mathrm{Var}(\hat{\beta}_j) = \frac{\sigma^2}{SST_j (1 - R_j^2)}
This means that
\frac{\hat{\beta}_j - \beta_j}{\mathrm{sd}(\hat{\beta}_j)} \sim N(0, 1)

The t-Test
We can replace \mathrm{sd}(\hat{\beta}_j) in \frac{\hat{\beta}_j - \beta_j}{\mathrm{sd}(\hat{\beta}_j)} \sim N(0, 1) with \mathrm{se}(\hat{\beta}_j) = \frac{\hat{\sigma}}{\sqrt{SST_j (1 - R_j^2)}}
Then, under the CLM assumptions we have that
t \equiv \frac{\hat{\beta}_j - \beta_j}{\mathrm{se}(\hat{\beta}_j)} \sim t_{n-k-1}
We refer to t as the "t-statistic"

Remarks
Note this is a t-distribution with n - k - 1 degrees of freedom (vs. the standard normal) because we have to estimate \sigma^2 by \hat{\sigma}^2
Facts: the t-distribution . . .
o Looks like the standard normal except it has fatter tails
o Is a family of distributions characterized by degrees of freedom
o Gets more like the standard normal as the degrees of freedom increase
o Is pretty much indistinguishable from a standard normal when df > 120

How to Conduct the Test
1. Start with a null hypothesis. For example, H_0: \beta_j = 0
We only reject if \hat{\beta}_j is "sufficiently far" from zero: if we want to have only a 5% probability of rejecting H_0 when it is really true, that is P(reject H_0 | H_0 true) = 0.05, then we say our significance level is \alpha = 0.05
Significance levels are usually chosen to be 1%, 5% or 10%
The exact rule on how to perform the test depends on the alternative hypothesis
If this null is true...
o Then x_j has no effect on y, controlling for the other x's
o Then x_j should be excluded from the model (efficiency argument, extraneous regressor)
→ We look at (1) one-sided and (2) two-sided alternatives, one at a time

2. Besides the null, H_0, we need an alternative hypothesis, H_1, and a significance level
H_1 may be one-sided or two-sided
o H_1: \beta_j > 0 and H_1: \beta_j < 0 are one-sided
o H_1: \beta_j \neq 0 is a two-sided alternative

The One-Sided Alternative: \beta_j > 0 (1/2)
Consider the alternative H_1: \beta_j > 0
Having picked a significance level, \alpha, we determine the (1 - \alpha)-th percentile of a t-distribution with n - k - 1 df and call this c, the critical value
Rejection rule (see the numerical sketch after these notes):
o t > c: reject the null hypothesis in favor of the alternative hypothesis if the observed t-statistic is greater than the critical value
o t \leq c: if the t-statistic is less than or equal to the critical value, then we do not reject the null

The One-Sided Alternative: \beta_j > 0 (2/2)
Model: y_i = \beta_0 + \beta_1 x_{i1} + \ldots + \beta_k x_{ik} + u_i
Hypothesis: H_0: \beta_j = 0 versus H_1: \beta_j > 0
[Figure: t-distribution density with the fail-to-reject region of area 1 - \alpha to the left of the critical value c and the rejection region of area \alpha to its right.]
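
To make the sampling-distribution and t-statistic formulas above concrete, here is a minimal Python sketch (not part of the lecture) that computes the t-statistic for a single coefficient directly from the OLS algebra, assuming NumPy. The function name and arguments are illustrative, and X is assumed to already contain a column of ones for the intercept.

# Minimal sketch, assuming NumPy; names are illustrative, not from the lecture.
import numpy as np

def t_stat_for_coefficient(X, y, j, beta_j0=0.0):
    """t-statistic for H0: beta_j = beta_j0 in y = X beta + u."""
    n, p = X.shape                                    # p = k + 1 regressors incl. intercept
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)              # sigma^2-hat, using n - k - 1 df
    var_beta = sigma2_hat * np.linalg.inv(X.T @ X)    # [var_beta]_jj = sigma^2-hat / (SST_j (1 - R_j^2))
    se_j = np.sqrt(var_beta[j, j])                    # se(beta_hat_j)
    return (beta_hat[j] - beta_j0) / se_j             # ~ t_{n-k-1} under H0

The j-th diagonal element of sigma^2-hat (X'X)^{-1} is the same quantity as the se formula on the "The t-Test" slide, just written in matrix form.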
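The one-sided rejection rule can be sketched the same way, assuming SciPy for the t-distribution percentiles. The numbers n = 100, k = 3, t_obs = 2.1 and alpha = 0.05 below are illustrative inputs, not lecture data.

# Minimal sketch of the one-sided test, assuming SciPy; inputs are illustrative.
from scipy.stats import norm, t as t_dist

def one_sided_test(t_obs, n, k, alpha=0.05):
    """Reject H0: beta_j = 0 in favor of H1: beta_j > 0 when t_obs > c."""
    df = n - k - 1
    c = t_dist.ppf(1.0 - alpha, df)   # critical value: (1 - alpha)th percentile of t_{n-k-1}
    return t_obs > c, c

reject, c = one_sided_test(2.1, n=100, k=3)    # c ≈ 1.66, so 2.1 > c and H0 is rejected at the 5% level

# The "df > 120" remark in practice: the t critical value is essentially the normal one.
print(t_dist.ppf(0.95, 150), norm.ppf(0.95))   # ≈ 1.655 vs ≈ 1.645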

