
STAT 203 PROBLEM SET 1
Due date: Jan. 21

(1) Let $e_1, \ldots, e_n$ be the residuals from the regression of $y_1, \ldots, y_n$ on $x_1, \ldots, x_n$.
    (a) Show that $\sum_{i=1}^{n} e_i = 0$.
    (b) One of the assumptions for simple least squares regression is the following: the errors $\epsilon_i$ are independent and identically distributed, with mean 0 and variance $\sigma^2$. Does $\sum_{i=1}^{n} e_i = 0$ help validate the above assumption? Why or why not?

(2) Consider least squares linear regression of $(y_1, \ldots, y_n)$ on $(x_1, \ldots, x_n)$ by the model $y_i \sim N(\beta_0 + \beta_1 x_i, \sigma^2)$, where the $y$'s are assumed to be independent.
    (a) Express the least squares estimator $\hat{\beta}$ and the residual vector $(y_1 - \hat{y}_1, \ldots, y_n - \hat{y}_n)$ in matrix notation as a linear transformation of $y$.
    (b) For $y \sim N(\mu, \Sigma)$, $u = Ay$, $v = By$, the covariance between $u$ and $v$ is $A \Sigma B^t$. Use this property to show that the residual vector $r$ is independent of $\hat{\beta}$.
    (c) Show that $\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \sim \sigma^2 \chi^2_{n-2}$.

(3) RABE Exercise 3.4 (Data file: Examination.txt)

(4) RABE Exercise 3.14 (Data file: Cigarette.txt)

(5) RABE Exercise 4.7 (Data file: Cigarette.txt)

RABE: Regression Analysis by Example by Chatterjee and Hadi, Ed. 4.
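As a numerical sanity check on problem (1a) — not part of the assignment — the following sketch fits a least squares line to synthetic data (the data and coefficients are made up for illustration) and verifies that the residuals from a fit with an intercept sum to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)  # synthetic data, true (beta_0, beta_1) = (2, 3)

# Least squares fit of y on x with an intercept column.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat  # residuals e_1, ..., e_n

# Problem (1a): sum of residuals is zero (up to floating point).
print(np.isclose(e.sum(), 0.0))  # True
```

The first column of ones is what makes this work: orthogonality of the residual vector to every column of $X$ includes the all-ones column, which is exactly $\sum_i e_i = 0$.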
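Problems (2a) and (2b) can also be checked numerically. The sketch below (again illustrative, with an invented design matrix) builds the two linear maps from (2a) — $\hat{\beta} = By$ with $B = (X^t X)^{-1} X^t$, and $r = Ay$ with $A = I - H$, $H = XB$ the hat matrix — and verifies that $A \Sigma B^t = 0$ when $\Sigma = \sigma^2 I$, which is the zero-covariance step in (2b):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# B maps y to beta_hat; H = XB is the hat matrix; A = I - H maps y to residuals.
B = np.linalg.solve(X.T @ X, X.T)
H = X @ B
A = np.eye(n) - H

# With Sigma = sigma^2 I, cov(r, beta_hat) = sigma^2 * A @ B.T
#                                          = sigma^2 (I - H) X (X^t X)^{-1},
# and (I - H) X = 0, so the covariance matrix is zero.
print(np.allclose(A @ B.T, 0.0))  # True

# trace(I - H) = n - 2, the chi-square degrees of freedom in part (2c).
print(np.isclose(np.trace(A), n - 2))  # True
```

Zero covariance plus joint normality (both $r$ and $\hat{\beta}$ are linear in the Gaussian $y$) is what upgrades "uncorrelated" to "independent" in (2b).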
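Finally, a Monte Carlo illustration of (2c), under the same invented design: if $\sum_i (y_i - \hat{y}_i)^2 / \sigma^2 \sim \chi^2_{n-2}$, its simulated mean should be close to $n - 2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, reps = 30, 1.5, 20000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix

# Simulate y ~ N(X beta, sigma^2 I) repeatedly and record RSS / sigma^2.
beta = np.array([1.0, -2.0])  # arbitrary true coefficients
mu = X @ beta
rss = np.empty(reps)
for i in range(reps):
    y = mu + sigma * rng.normal(size=n)
    r = y - H @ y
    rss[i] = (r @ r) / sigma**2

# A chi^2_{n-2} variable has mean n - 2 = 28; the simulated mean should be near it.
print(abs(rss.mean() - (n - 2)) < 0.5)  # True
```

This is evidence, not proof; the assignment asks for the distributional argument (e.g. via the spectral decomposition of $I - H$, whose trace is $n - 2$).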

This note was uploaded on 04/25/2010 for the course MATH 30 taught by Professor Karaali during the Spring '08 term at Pomona College.
