Chapter 2. Inference in Regression Analysis

Math Stat Result: If $Y_i \sim N(\mu_i, \sigma_i^2)$, the $Y_i$'s are independent, and $a_1, a_2, \ldots, a_n$ are known constants, then
\[
\sum_{i=1}^n a_i Y_i \;\sim\; N\!\left( \sum_{i=1}^n a_i \mu_i ,\; \sum_{i=1}^n a_i^2 \sigma_i^2 \right).
\]
Thus, a linear combination of independent normal random variables is itself a normal random variable.

Theorem: $b_0$ and $b_1$ are linear combinations of the $Y_i$'s. That is, we can write
\[
b_1 = \sum_{i=1}^n k_i Y_i \qquad \text{and} \qquad b_0 = \sum_{i=1}^n l_i Y_i ,
\]
where $k_1, \ldots, k_n$ and $l_1, \ldots, l_n$ are known constants.

Proof: Recall $S_{XX} = \sum_{i=1}^n (X_i - \bar{X})^2$. So
\[
b_1 = \frac{1}{S_{XX}} \sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y})
    = \frac{1}{S_{XX}} \left[ \sum_{i=1}^n (X_i - \bar{X}) Y_i \;-\; \bar{Y} \sum_{i=1}^n (X_i - \bar{X}) \right]
    = \frac{1}{S_{XX}} \sum_{i=1}^n (X_i - \bar{X}) Y_i
    = \sum_{i=1}^n \left( \frac{X_i - \bar{X}}{S_{XX}} \right) Y_i
    = \sum_{i=1}^n k_i Y_i
\]
with $k_i = \dfrac{X_i - \bar{X}}{S_{XX}}$, where the second sum in brackets drops out because $\sum_{i=1}^n (X_i - \bar{X}) = 0$. Likewise,
\[
b_0 = \bar{Y} - b_1 \bar{X}
    = \frac{1}{n} \sum_{i=1}^n Y_i - \bar{X} \sum_{i=1}^n k_i Y_i
    = \sum_{i=1}^n \left( \frac{1}{n} - k_i \bar{X} \right) Y_i
    = \sum_{i=1}^n l_i Y_i
\]
with $l_i = \dfrac{1}{n} - k_i \bar{X}$. Thus, $b_0$ and $b_1$ are linear combinations of the $Y_i$'s and, hence, they are normal variates. What about their means and variances?

Theorem: Under the SLR model with normal errors,
\[
b_1 \sim N\!\left( \beta_1 ,\; \frac{\sigma^2}{S_{XX}} \right)
\qquad \text{and} \qquad
b_0 \sim N\!\left( \beta_0 ,\; \frac{\sigma^2 \sum_i X_i^2}{n\, S_{XX}} \right).
\]

We are first interested in $\sum_i k_i$, $\sum_i k_i X_i$, and $\sum_i k_i^2$:
\[
\sum_{i=1}^n k_i = \sum_{i=1}^n \frac{X_i - \bar{X}}{S_{XX}} = \frac{1}{S_{XX}} \sum_{i=1}^n (X_i - \bar{X}) = 0 ,
\]
\[
\sum_{i=1}^n k_i X_i = \sum_{i=1}^n \frac{X_i - \bar{X}}{S_{XX}}\, X_i = \frac{1}{S_{XX}}\, S_{XX} = 1
\quad \text{(since } \textstyle\sum_{i=1}^n (X_i - \bar{X}) X_i = \sum_{i=1}^n (X_i - \bar{X})^2 = S_{XX}\text{)},
\]
\[
\sum_{i=1}^n k_i^2 = \frac{1}{S_{XX}^2} \sum_{i=1}^n (X_i - \bar{X})^2 = \frac{1}{S_{XX}} .
\]

Proof: Since $b_1 = \sum_{i=1}^n k_i Y_i$, we get
\[
E(b_1) = \sum_{i=1}^n k_i\, E(Y_i) = \sum_{i=1}^n k_i (\beta_0 + \beta_1 X_i).
\]
Because $\sum_i k_i = 0$ and $\sum_i k_i X_i = 1$, this is
\[
E(b_1) = \beta_0 \sum_{i=1}^n k_i + \beta_1 \sum_{i=1}^n k_i X_i = \beta_1 .
\]
Therefore, $b_1$ is an unbiased estimator of $\beta_1$. With $\sum_i k_i^2 = 1/S_{XX}$ and the independence of the $Y_i$'s, we get
\[
\mathrm{Var}(b_1) = \mathrm{Var}\!\left( \sum_{i=1}^n k_i Y_i \right)
= \sum_{i=1}^n k_i^2\, \mathrm{Var}(Y_i)
= \sigma^2 \sum_{i=1}^n k_i^2
= \frac{\sigma^2}{S_{XX}} .
\]
Showing $b_0 \sim N\!\left( \beta_0, \frac{\sigma^2 \sum_i X_i^2}{n S_{XX}} \right)$ is basically the same; a sketch of those steps is given after the example below.

Example: 93 house prices in Gville sold in Dec. 1995. (A good example for the project; see http://www.fsboingainesville.com/ for more and current data.)
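The proof for $b_0$ is left out above as "basically the same." For completeness, here is a sketch of those omitted steps; it is a reconstruction from the identities already derived, not part of the original notes. Using $l_i = \frac{1}{n} - k_i \bar{X}$ together with $\sum_i k_i = 0$ and $\sum_i k_i X_i = 1$, we get $\sum_i l_i = 1$ and $\sum_i l_i X_i = \bar{X} - \bar{X} = 0$, so
\[
E(b_0) = \sum_{i=1}^n l_i (\beta_0 + \beta_1 X_i)
       = \beta_0 \sum_{i=1}^n l_i + \beta_1 \sum_{i=1}^n l_i X_i
       = \beta_0 .
\]
For the variance, $\sum_i k_i = 0$ and $\sum_i k_i^2 = 1/S_{XX}$ give
\[
\mathrm{Var}(b_0) = \sigma^2 \sum_{i=1}^n l_i^2
= \sigma^2 \sum_{i=1}^n \left( \frac{1}{n^2} - \frac{2 k_i \bar{X}}{n} + k_i^2 \bar{X}^2 \right)
= \sigma^2 \left( \frac{1}{n} + \frac{\bar{X}^2}{S_{XX}} \right)
= \frac{\sigma^2 \sum_i X_i^2}{n\, S_{XX}} ,
\]
where the last equality uses $\sum_i X_i^2 = S_{XX} + n \bar{X}^2$.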
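The $k_i$ and $l_i$ identities that drive the whole argument are easy to verify numerically. The following is a minimal Python/NumPy sketch (an addition, not part of the notes); the simulated data and parameter values are arbitrary stand-ins, not the Gville house-price data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary simulated SLR data for illustration only.
n = 93
beta0, beta1, sigma = 50.0, 2.5, 10.0
X = rng.uniform(10, 40, size=n)
Y = beta0 + beta1 * X + rng.normal(0, sigma, size=n)

Xbar = X.mean()
Sxx = np.sum((X - Xbar) ** 2)

# Coefficients from the proof: b1 = sum(k_i Y_i), b0 = sum(l_i Y_i).
k = (X - Xbar) / Sxx
l = 1.0 / n - k * Xbar

# The three identities used in the mean/variance derivations.
assert np.isclose(k.sum(), 0.0)              # sum k_i     = 0
assert np.isclose((k * X).sum(), 1.0)        # sum k_i X_i = 1
assert np.isclose((k**2).sum(), 1.0 / Sxx)   # sum k_i^2   = 1/S_XX

# The linear combinations reproduce the least-squares estimates.
b1, b0 = (k * Y).sum(), (l * Y).sum()
slope, intercept = np.polyfit(X, Y, deg=1)
assert np.isclose(b1, slope) and np.isclose(b0, intercept)

# Monte Carlo check of Var(b1) = sigma^2 / S_XX (X held fixed).
reps = 20000
Ysim = beta0 + beta1 * X + rng.normal(0, sigma, size=(reps, n))
b1_sim = Ysim @ k                            # row-wise sum(k_i Y_i)
print(b1_sim.var(), sigma**2 / Sxx)          # the two should be close
```

Holding $X$ fixed across the Monte Carlo replications matches the conditional-on-$X$ sampling framework the theorem assumes, so the empirical variance of the simulated $b_1$ values should settle near $\sigma^2 / S_{XX}$.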