Review (Lecture 3 – p. 1/18)

E(b₁) = β₁,  Var(b₁) = σ²/S_XX
E(b₀) = β₀,  Var(b₀) = σ²(1/n + x̄²/S_XX)

b₁ and b₀ are the maximum likelihood estimators (i.e. they are β̂₁ and β̂₀).

Statistical inference for β₁

Although we know the expectation and variance of our estimators, b* and b₁, we also need distributions for the estimators to make inferences. Usually we make inferences through hypothesis tests and by building confidence intervals. In order to do this exactly, we'll need to assume that the data are normally distributed, usually accomplished by assuming that εᵢ ∼ N(0, σ²).

t test for β₁ and β₀

We can test using our estimator β̂₁ = b₁ and its properties. Remember

β̂₁ = b₁ = S_XY/S_XX = ∑ᵢ₌₁ⁿ yᵢ(xᵢ − x̄) / ∑ᵢ₌₁ⁿ (xᵢ − x̄)²

The εᵢ are independent normals, so the yᵢ are independent normals. The xᵢ's are constants. Therefore β̂₁ is a weighted sum of normal random variables, and so is also normal.

We know the expected value (β₁) and variance (σ²/S_XX) of β̂₁, so

β̂₁ ∼ Normal(β₁, σ²/S_XX)

Let

SS_Res = ∑ᵢ₌₁ⁿ (yᵢ − ŷᵢ)² = ∑ᵢ₌₁ⁿ (yᵢ − (b* + (xᵢ − x̄)b₁))²

Can show that SS_Res/σ...
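The formulas above (the slope b₁ = S_XY/S_XX, the residual sum of squares, and the resulting t statistic for H₀: β₁ = 0) can be checked numerically. Below is a minimal Python sketch; the dataset is made up for illustration and does not come from the lecture.

```python
import math

# Hypothetical toy data (not from the lecture), roughly following y = 2x.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n

# S_XX and S_XY as defined in the notes
S_XX = sum((xi - xbar) ** 2 for xi in x)
S_XY = sum(yi * (xi - xbar) for xi, yi in zip(x, y))

b1 = S_XY / S_XX           # slope estimate, beta1-hat
b0 = ybar - b1 * xbar      # intercept estimate, beta0-hat

# Residual sum of squares; MSE = SS_Res / (n - 2) estimates sigma^2
SS_Res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
MSE = SS_Res / (n - 2)

# Since beta1-hat ~ Normal(beta1, sigma^2 / S_XX), the t statistic
# for H0: beta1 = 0 replaces sigma^2 with MSE:
se_b1 = math.sqrt(MSE / S_XX)
t_stat = b1 / se_b1

print(b1, se_b1, t_stat)
```

Under H₀, t_stat follows a t distribution with n − 2 degrees of freedom (the degrees of freedom lost to estimating b₀ and b₁), which is where the truncated SS_Res/σ² result above is headed.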
This note was uploaded on 01/15/2010 for the course MATH 423 taught by Professor Steele during the Spring '06 term at McGill.